Lolly's Wiki
wikidb
https://lars.timmann.de/wiki/index.php?title=Main_Page
MediaWiki 1.41.0-alpha
first-letter
Media
Special
Talk
User
User talk
Lolly's Wiki
Lolly's Wiki talk
File
File talk
MediaWiki
MediaWiki talk
Template
Template talk
Help
Help talk
Category
Category talk
Hauptseite
0
1
1
2012-05-22T09:13:31Z
MediaWiki default
0
wikitext
text/x-wiki
'''MediaWiki wurde erfolgreich installiert.'''
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
== Starthilfen ==
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
3e49cd7ebcf2690896f04fa5d75773f2825463fb
21
1
2012-05-22T13:42:12Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on>Projekte</categorytree>
== Starthilfen ==
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
70607a23f4bf767207f2146ce410116065323ac9
22
21
2012-05-22T13:43:02Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=all hideroot=on>Projekte</categorytree>
== Starthilfen ==
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
a538f79570543ed9bbbe9d719d977276070f04d8
23
22
2012-05-22T13:45:43Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on>Projekte</categorytree>
== Starthilfen ==
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
70607a23f4bf767207f2146ce410116065323ac9
24
23
2012-05-22T13:53:14Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on depth=2>Projekte</categorytree>
== Starthilfen ==
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
fdad01a22224e0d2fdcd3025d0a57d744e8f64d9
36
24
2012-05-22T15:06:10Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on depth=2>Projekte</categorytree>
=KnowHow=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
== Starthilfen ==
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
eb865ba47233611c59e5b93b7b1cb90b3ca391ce
43
36
2012-05-23T06:50:59Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on depth=2>Projekte</categorytree>
=KnowHow=
<categorytree mode=pages hideroot=on depth=1>KnowHow</categorytree>
== Starthilfen ==
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
d93fab73fa95ef589f1aa4b844837054f3f1f3b6
44
43
2012-05-23T06:52:31Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on depth=2>Projekte</categorytree>
=KnowHow=
<categorytree mode=pages hideroot=on depth=1>KnowHow</categorytree>
= Starthilfen zum Wiki =
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
caea0037ff32938d2345cbe433cf8521f5dae7de
47
44
2012-05-23T06:56:43Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on depth=2>Projekte</categorytree>
=KnowHow=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
= Starthilfen zum Wiki =
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
042dcd0ecb5e18ec6840971ebf795263437537e6
Category:Ameisen
14
3
3
2012-05-22T12:07:22Z
Lollypop
2
Die Seite wurde neu angelegt: „Ameisen“
wikitext
text/x-wiki
Ameisen
513943b020bc215c0080169c1ed48f2891342533
9
3
2012-05-22T12:44:27Z
Lollypop
2
wikitext
text/x-wiki
Ameisen
[[Kategorie:Projekte]]
b8b06de6ebc721255669816c160fabaf4af405ef
Lolly's Wiki:Gemeinschafts-Portal
4
4
4
2012-05-22T12:10:04Z
Lollypop
2
Die Seite wurde neu angelegt: „=Projekte= ==Tiere== <categorytree mode=pages>Ameisen</categorytree> <categorytree mode=pages>Gottesanbeterinnen</categorytree> <categorytree mode=pages>Tausendf…“
wikitext
text/x-wiki
=Projekte=
==Tiere==
<categorytree mode=pages>Ameisen</categorytree>
<categorytree mode=pages>Gottesanbeterinnen</categorytree>
<categorytree mode=pages>Tausendfüßer</categorytree>
==Pflanzen==
=Technik=
==Computer==
539b177a809cd3d7215438bdde6357d99c7b5a42
7
4
2012-05-22T12:24:51Z
Lollypop
2
wikitext
text/x-wiki
=Projekte=
<categorytree mode=categories>Ameisen</categorytree>
<categorytree mode=categories>Gottesanbeterinnen</categorytree>
<categorytree mode=categories>Tausendfüßer</categorytree>
ddc733fa1a143a281b77a30065d9e02a668fd69d
8
7
2012-05-22T12:27:56Z
Lollypop
2
wikitext
text/x-wiki
=Projekte=
<categorytree mode=categories>Ameisen</categorytree>
7f016bf7f16ebe3bed0d701565cee2c19b1fa669
10
8
2012-05-22T12:45:00Z
Lollypop
2
wikitext
text/x-wiki
=Projekte=
<categorytree mode=categories>Projekte</categorytree>
ca77498b2791c360859cd67d66d5c286d9caedfd
14
10
2012-05-22T12:52:41Z
Lollypop
2
wikitext
text/x-wiki
=Projekte=
<categorytree mode=categories>Ameisen</categorytree>
7f016bf7f16ebe3bed0d701565cee2c19b1fa669
15
14
2012-05-22T13:00:21Z
Lollypop
2
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=categories hideroot=on>Projekte</categorytree>
58226942d79342b2f0f2e20516e799ec0718b2ab
16
15
2012-05-22T13:03:13Z
Lollypop
2
/* Meine Projekte */
wikitext
text/x-wiki
=Meine Projekte=
<categorytree mode=pages hideroot=on>Projekte</categorytree>
e40a74eb475e3bd17b5c4ac7b08f3b766200e5a8
20
16
2012-05-22T13:41:28Z
Lollypop
2
Die Seite wurde geleert.
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Myrmica
14
5
5
2012-05-22T12:11:47Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Category:Messor
14
6
6
2012-05-22T12:13:31Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Category:Projekte
14
7
11
2012-05-22T12:47:27Z
Lollypop
2
Die Seite wurde neu angelegt: „=Projekte=“
wikitext
text/x-wiki
=Projekte=
e4bea54604e7e9d179e7da3dadadad38333fcb0a
Category:Tausendfuesser
14
9
13
2012-05-22T12:50:14Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Projekte]]“
wikitext
text/x-wiki
[[Kategorie:Projekte]]
e5d54bb57ba0950c24694ecf450e32e79e629d83
Messor barbarus
0
10
17
2012-05-22T13:03:58Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Messor]]“
wikitext
text/x-wiki
[[Kategorie:Messor]]
575c3724c2c4cd658266ee50cee0c28720724aeb
MediaWiki:Sidebar
8
11
18
2012-05-22T13:39:44Z
Lollypop
2
Die Seite wurde neu angelegt: „* navigation ** portal-url|portal ** mainpage|mainpage-description ** currentevents-url|currentevents ** recentchanges-url|recentchanges ** randompage-url|randomp…“
wikitext
text/x-wiki
* navigation
** portal-url|portal
** mainpage|mainpage-description
** currentevents-url|currentevents
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
6b6cff438797b39694db9fceb797670445a95d5f
19
18
2012-05-22T13:41:03Z
Lollypop
2
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** portal-url|portal
** currentevents-url|currentevents
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
e7f21f0cae3181e6175a69bd44c633184afba8b5
Category:Archispirostreptus
14
12
25
2012-05-22T13:54:22Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Tausendfuesser]]“
wikitext
text/x-wiki
[[Kategorie:Tausendfuesser]]
0621b82421e1a1f52ea373a7753ab04e4564b24b
Archispirostreptus gigas
0
13
26
2012-05-22T13:55:22Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Archispirostreptus]]“
wikitext
text/x-wiki
[[Kategorie:Archispirostreptus]]
259a2788ac1714da47d0ce657943c4a1e98dcb45
Myrmica rubra
0
14
27
2012-05-22T14:55:51Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Myrmica]]“
wikitext
text/x-wiki
[[Kategorie:Myrmica]]
33ed3ffb7d32e835fa124f24603a1e521611bf9f
Tetramorium caespitum
0
15
28
2012-05-22T14:57:12Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Tetramorium]]“
wikitext
text/x-wiki
[[Kategorie:Tetramorium]]
162654114239a20d553f4f2b5d4bad35913a588a
Category:Tetramorium
14
16
29
2012-05-22T14:57:34Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Lasius flavus
0
17
30
2012-05-22T15:00:15Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Lasius]]“
wikitext
text/x-wiki
[[Kategorie:Lasius]]
1bea11ef7d8a75034969f5e0689219be01fc8996
Category:Lasius
14
18
31
2012-05-22T15:00:40Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Formica fusca
0
19
32
2012-05-22T15:01:42Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Formica]]“
wikitext
text/x-wiki
[[Kategorie:Formica]]
0947935d1949adf90a4dad06e74e720964344a85
Category:Formica
14
20
33
2012-05-22T15:02:11Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Category:Solaris
14
21
34
2012-05-22T15:04:33Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Category:KnowHow
14
22
35
2012-05-22T15:05:21Z
Lollypop
2
Die Seite wurde neu angelegt: „=Allgemeines Know How zu verschiedenen Themen=“
wikitext
text/x-wiki
=Allgemeines Know How zu verschiedenen Themen=
4f61a917cf13e2fefb343d66f8029dd5263830b9
Solaris mdb magic
0
23
37
2012-05-22T15:12:47Z
Lollypop
2
Die Seite wurde neu angelegt: „=Verschiedene kleine mdb Tricks= ==Memory usage== <code> # echo ::memstat|mdb -k Page Summary Pages MB %Tot ------------ -----…“
wikitext
text/x-wiki
=Verschiedene kleine mdb Tricks=
==Memory usage==
<code>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</code>
==Kernelparameter abfragen==
Syntax: echo '<Parameter>/D' | mdb -k
<code>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</code>
==Kernelparameter setzen==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<code>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</code>
[[Kategorie:Solaris]]
ed37f821d7303bb6d398143c19559d278c6daf8a
42
37
2012-05-23T06:50:13Z
Lollypop
2
wikitext
text/x-wiki
=Verschiedene kleine mdb Tricks=
==Memory usage==
<pre>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</pre>
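Die Ausgabe von ::memstat lässt sich auch per awk weiterverarbeiten, z.B. um nur die MB-Spalte der Kernel-Zeile herauszuziehen. Reine Skizze: die Beispieldaten von oben werden hier per Here-Doc eingespeist, auf einem echten Solaris-System stünde dort stattdessen echo ::memstat | mdb -k:

```shell
# Die MB-Spalte (3. Feld) der "Kernel"-Zeile aus einer ::memstat-Ausgabe ziehen.
# Live auf Solaris: echo ::memstat | mdb -k | awk '$1 == "Kernel" { print $3 }'
awk '$1 == "Kernel" { print $3 }' <<'EOF'
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    2855874             11155   69%
Free (freelist)           1221894              4773   29%
EOF
```

Das liefert für das obige Beispiel die 11155 MB der Kernel-Zeile.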
==Kernelparameter abfragen==
Syntax: echo '<Parameter>/D' | mdb -k
<pre>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</pre>
==Kernelparameter setzen==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<pre>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</pre>
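Die beiden Aufrufe oben lassen sich als kleine Shell-Funktionen kapseln. Reine Skizze: mdb gibt es nur unter Solaris, daher bauen die (frei erfundenen) Helfer mdb_read_cmd/mdb_write_cmd hier nur die Kommandozeile als String zusammen:

```shell
# Hypothetische Helfer: bauen die oben gezeigten mdb-Pipelines als String.
# Auf Solaris koennte man stattdessen direkt ausfuehren:
#   echo "$1/D" | mdb -k   bzw.   echo "$1/W$2" | mdb -kw
mdb_read_cmd()  { printf "echo '%s/D' | mdb -k\n" "$1"; }
mdb_write_cmd() { printf "echo '%s/W%s' | mdb -kw\n" "$1" "$2"; }

mdb_read_cmd ncsize
mdb_write_cmd do_tcp_fusion 0
```

Die Ausgabe entspricht genau den beiden Beispielkommandos aus den Abschnitten oben.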
[[Kategorie:Solaris]]
c92e8c1c1a3f844ddbfe636e5bcb533ce0db9be4
Solaris kernel debugging
0
24
38
2012-05-22T15:14:48Z
Lollypop
2
Die Seite wurde neu angelegt: „* Direkt in den Debugger booten <code> ok> boot -kd ... Welcome to kmdb kmdb: unable to determine terminal type: assuming `vt100' [0]> </code> oder bei x86 Grube…“
wikitext
text/x-wiki
* Direkt in den Debugger booten
<code>
ok> boot -kd
...
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
[0]>
</code>
oder bei x86 Grubeintrag auswählen und in der "kernel"-Zeile -kd hinzufügen...
* Mod-Debug aktivieren
<code>
[0]> moddebug/W 0x80000000
moddebug: 0 = 0x80000000
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Mod-Kmem aktivieren
<code>
[0]> kmem_flags/W 0x0000000f
kmem_flags: 0 = 0xf
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Mod-snooping aktivieren
<code>
[0]> snooping/W 0x1
snooping: 0 = 0x1
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Stack ausgeben lassen
<code>
[0]> $c
</code>
* Letzte Meldungen
<code>
[0]> ::msgbuf
</code>
* Crashdump schreiben lassen bei x86-Systemen
<code>
panic...
[0]> $<systemdump
</code>
* Links
* [http://developers.sun.com/solaris/articles/manage_core_dump.html Core Dump Management on the Solaris OS]
* [http://www.c0t0d0s0.org/presentations/hhosug/hhosug2.pdf PDF des zweiten HHOSUG Meetings]
[[Kategorie:Solaris]]
2ee72607f4b5d9f651f765a1b88159aaa7cf83dd
39
38
2012-05-22T15:27:05Z
Lollypop
2
wikitext
text/x-wiki
* Direkt in den Debugger booten
<syntaxhighlight lang="bash">
ok> boot -kd
...
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
[0]>
</syntaxhighlight>
oder bei x86 Grubeintrag auswählen und in der "kernel"-Zeile -kd hinzufügen...
* Mod-Debug aktivieren
<code>
[0]> moddebug/W 0x80000000
moddebug: 0 = 0x80000000
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Mod-Kmem aktivieren
<code>
[0]> kmem_flags/W 0x0000000f
kmem_flags: 0 = 0xf
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Mod-snooping aktivieren
<code>
[0]> snooping/W 0x1
snooping: 0 = 0x1
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Stack ausgeben lassen
<code>
[0]> $c
</code>
* Letzte Meldungen
<code>
[0]> ::msgbuf
</code>
* Crashdump schreiben lassen bei x86-Systemen
<code>
panic...
[0]> $<systemdump
</code>
* Links
* [http://developers.sun.com/solaris/articles/manage_core_dump.html Core Dump Management on the Solaris OS]
* [http://www.c0t0d0s0.org/presentations/hhosug/hhosug2.pdf PDF des zweiten HHOSUG Meetings]
[[Kategorie:Solaris]]
1bd2781c13ac3aa54ccbf2ec67722e565185466a
40
39
2012-05-22T15:29:42Z
Lollypop
2
wikitext
text/x-wiki
* Direkt in den Debugger booten
<code>
ok> boot -kd
...
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
[0]>
</code>
oder bei x86 Grubeintrag auswählen und in der "kernel"-Zeile -kd hinzufügen...
* Mod-Debug aktivieren
<code>
[0]> moddebug/W 0x80000000
moddebug: 0 = 0x80000000
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Mod-Kmem aktivieren
<code>
[0]> kmem_flags/W 0x0000000f
kmem_flags: 0 = 0xf
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Mod-snooping aktivieren
<code>
[0]> snooping/W 0x1
snooping: 0 = 0x1
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</code>
* Stack ausgeben lassen
<code>
[0]> $c
</code>
* Letzte Meldungen
<code>
[0]> ::msgbuf
</code>
* Crashdump schreiben lassen bei x86-Systemen
<code>
panic...
[0]> $<systemdump
</code>
* Links
* [http://developers.sun.com/solaris/articles/manage_core_dump.html Core Dump Management on the Solaris OS]
* [http://www.c0t0d0s0.org/presentations/hhosug/hhosug2.pdf PDF des zweiten HHOSUG Meetings]
[[Kategorie:Solaris]]
2ee72607f4b5d9f651f765a1b88159aaa7cf83dd
41
40
2012-05-23T06:48:55Z
Lollypop
2
wikitext
text/x-wiki
* Direkt in den Debugger booten
<pre>
ok> boot -kd
...
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
[0]>
</pre>
oder bei x86 Grubeintrag auswählen und in der "kernel"-Zeile -kd hinzufügen...
* Mod-Debug aktivieren
<pre>
[0]> moddebug/W 0x80000000
moddebug: 0 = 0x80000000
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Mod-Kmem aktivieren
<pre>
[0]> kmem_flags/W 0x0000000f
kmem_flags: 0 = 0xf
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Mod-snooping aktivieren
<pre>
[0]> snooping/W 0x1
snooping: 0 = 0x1
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Stack ausgeben lassen
<pre>
[0]> $c
</pre>
* Letzte Meldungen
<pre>
[0]> ::msgbuf
</pre>
* Crashdump schreiben lassen bei x86-Systemen
<pre>
panic...
[0]> $<systemdump
</pre>
* Links
* [http://developers.sun.com/solaris/articles/manage_core_dump.html Core Dump Management on the Solaris OS]
* [http://www.c0t0d0s0.org/presentations/hhosug/hhosug2.pdf PDF des zweiten HHOSUG Meetings]
[[Kategorie:Solaris]]
a1e4b721f268ccc936712f5a735c1dff784d634f
Solaris SVM boot cdrom with metadevices
0
25
45
2012-05-23T06:54:43Z
Lollypop
2
Die Seite wurde neu angelegt: „First, boot from the media: # boot net -s Now mount one of the subdisks read-only, so you cannot accidentally damage the subdisk: # mount -o ro /dev/dsk/c0t0…“
wikitext
text/x-wiki
First, boot from the media:
<pre># boot net -s</pre>
Now mount one of the subdisks read-only, so you cannot accidentally damage the subdisk:
<pre># mount -o ro /dev/dsk/c0t0d0s0 /a</pre>
Then set up the current booted environment so it can use Solaris Volume Manager:
<pre>
# cp /a/kernel/drv/md.conf /kernel/drv/md.conf
# umount /a
</pre>
Now update the Solaris Volume Manager driver to load the configuration:
<pre># update_drv -f md</pre>
Ignore any error messages from update_drv:
<pre># metainit -r</pre>
[[Kategorie:Solaris_SVM]]
4e1781eb8e7f177278b1437956f9ce01e76106e0
Category:Solaris SVM
14
26
46
2012-05-23T06:55:08Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]]“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
45811ac9bef9ab2254080294d01e6f892f5d9499
Exim cheatsheet
0
27
48
2012-05-23T07:22:23Z
Lollypop
2
Die Seite wurde neu angelegt: „Up: [[exim]] ==Fragen und Antworten== ===Header einer MailID ansehen=== <pre># exim -mvh <msgid></pre> ===Statistiken der aktuellen Queue ansehen=== <pre># ex…“
wikitext
text/x-wiki
Up: [[exim]]
==Fragen und Antworten==
===Header einer MailID ansehen===
<pre># exim -mvh <msgid></pre>
===Statistiken der aktuellen Queue ansehen===
<pre># exim -bpu | exiqsum <parameter></pre>
===Routing von Mails testen===
====Kurz und bündig====
<pre># exim -bv -v <Mailadresse></pre>
====Mit viel Debugging====
<pre># exim -bv -d+all <Mailadresse></pre>
===Wie stosse ich den Versand aller Mails für eine bestimmte Domain an?===
<pre># exim -Rff <Domain></pre>
===Wie stosse ich den Versand EINER bestimmten Mail erneut an?===
<pre># exim -M <message-id></pre>
===Wie ermittle ich, wieviele Mails in der Queue liegen?===
<pre># exim -bpc</pre>
===Wie finde ich eine bestimmte Mail in der Queue?===
Dazu kann entweder in den Logfiles gesucht werden
<pre># exigrep <pattern> /var/log/exim/mainlog-jjjjmmdd</pre>
oder es kann in der Queue gesucht werden
<pre># exiqgrep -r <pattern></pre>
===Was tun die Exim-Prozesse?===
<pre># exiwhat</pre>
===Ausgeben von Exim-Parametern===
<pre># exim -bP <Parameter></pre>
z.B.:
<pre># exim -bP message_size_limit</pre>
===Immer gut: queue files ansehen===
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
[[Kategorie:Exim]]
63bbc6934f35ad1c61d0f3d30b44a67e6222596c
50
48
2012-05-23T07:26:44Z
Lollypop
2
wikitext
text/x-wiki
=Fragen und Antworten=
==Header einer MailID ansehen==
<pre># exim -mvh <msgid></pre>
==Statistiken der aktuellen Queue ansehen==
<pre># exim -bpu | exiqsum <parameter></pre>
==Routing von Mails testen==
===Kurz und bündig===
<pre># exim -bv -v <Mailadresse></pre>
===Mit viel Debugging===
<pre># exim -bv -d+all <Mailadresse></pre>
==Wie stosse ich den Versand aller Mails für eine bestimmte Domain an?==
<pre># exim -Rff <Domain></pre>
==Wie stosse ich den Versand EINER bestimmten Mail erneut an?==
<pre># exim -M <message-id></pre>
==Wie ermittle ich, wieviele Mails in der Queue liegen?==
<pre># exim -bpc</pre>
==Wie finde ich eine bestimmte Mail in der Queue?==
Dazu kann entweder in den Logfiles gesucht werden
<pre># exigrep <pattern> /var/log/exim/mainlog-jjjjmmdd</pre>
oder es kann in der Queue gesucht werden
<pre># exiqgrep -r <pattern></pre>
Besser als exigrep ist exipick!
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
==Was tun die Exim-Prozesse?==
<pre># exiwhat</pre>
==Ausgeben von Exim-Parametern==
<pre># exim -bP <Parameter></pre>
z.B.:
<pre># exim -bP message_size_limit</pre>
==Immer gut: queue files ansehen==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
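Die Message-IDs aus der Queue-Ausgabe von exim -bp kann man auch direkt mit awk herausfiltern, etwa um sie anschließend an exim -M weiterzureichen. Reine Skizze mit eingebetteten Beispieldaten (das angenommene Ausgabeformat: pro Mail eine Kopfzeile mit Alter, Größe, Message-ID und Absender; live stünde hier exim -bp | awk ...):

```shell
# Message-IDs (3. Feld der Kopfzeile, Format xxxxxx-yyyyyy-zz) aus einer
# `exim -bp`-Ausgabe filtern; "frozen"-Zeilen u.ae. fallen durch NF-Check raus.
awk 'NF >= 4 && $3 ~ /^[0-9A-Za-z]+-[0-9A-Za-z]+-[0-9A-Za-z]+$/ { print $3 }' <<'EOF'
 25m  2.9K 1a2b3c-000001-AB <someone@example.org>
          frozen
  3h  1.1K 1a2b3c-000002-CD <other@example.org>
EOF
```

Für das Beispiel kommen die beiden IDs 1a2b3c-000001-AB und 1a2b3c-000002-CD heraus.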
[[Kategorie:Exim]]
772eee3b5f0528587c2fbb3be110aa5218dc441b
Category:Exim
14
28
49
2012-05-23T07:22:57Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
ZFS cheatsheet
0
29
51
2012-05-23T07:35:52Z
Lollypop
2
Die Seite wurde neu angelegt: „* [[ZFS_and_Databases|Eignung von ZFS für Datenbanken]] * [[ZFS_and_Backup|Welche Backupsoftware kann man für ZFS benutzen]] * [[ZFS_Autosnap|Automatische Snaps…“
wikitext
text/x-wiki
* [[ZFS_and_Databases|Eignung von ZFS für Datenbanken]]
* [[ZFS_and_Backup|Welche Backupsoftware kann man für ZFS benutzen]]
* [[ZFS_Autosnap|Automatische Snapshots auf Solaris 10]]
* [[ZFS_Recovery|Reparieren von defektem ZFS]]
* Wichtige ZFS-Patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Eine gefühlte Langsamkeit auf Systemen mit ZFS kommt vom sehr großen Cachehunger. Den kann man eingrenzen:
Erst mal schauen, was Phase ist:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Oder nur ZFS
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Ausgeben aller ARC-Parameter:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
Man kann sich auch alle Parameter ausgeben lassen, die für ZFS gesetzt sind mit:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Setzen von Kernelparametern geht auch online mit:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
Das setzt den zfs_arc_max auf 4GB = 0x100000000
== Limitieren des ARC Cache ==
In der /etc/system einfach:
set zfs:zfs_arc_max = <Number of bytes>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== ZFS Platzverbrauch besser anzeigen ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
Wenn zfs list -o space als shortcut noch nicht zur Verfügung steht, geht meist:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
[[Kategorie:ZFS]]
7dee1625a84dee848addbbad2e116a9af9ddf429
ZFS Recovery
0
30
52
2012-05-23T07:37:09Z
Lollypop
2
Die Seite wurde neu angelegt: „Siehe [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]: The b…“
wikitext
text/x-wiki
Siehe [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<code>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</code>
[[Kategorie:ZFS]]
3dccb3dd1d55c0f68974a0568ca621adeb3e7525
ZFS Recovery
0
30
53
52
2012-05-23T07:37:35Z
Lollypop
2
wikitext
text/x-wiki
Siehe [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
[[Kategorie:ZFS]]
24b048c56d4ac6ea53bf8459f1b8ab5835fdae34
Category:ZFS
14
31
54
2012-05-23T07:37:59Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]]“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
45811ac9bef9ab2254080294d01e6f892f5d9499
ZFS cheatsheet
0
29
55
51
2012-05-23T07:42:20Z
Lollypop
2
wikitext
text/x-wiki
* [[ZFS_Recovery|Reparieren von defektem ZFS]]
* Wichtige ZFS-Patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Eine gefühlte Langsamkeit auf Systemen mit ZFS kommt vom sehr großen Cachehunger. Den kann man eingrenzen:
Erst mal schauen, was Phase ist:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Oder nur ZFS
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Ausgeben aller ARC-Parameter:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
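The hits and misses counters at the top of the ::arc output can be turned into an overall hit ratio; a minimal sketch for any POSIX shell with awk, using the sample values from the output above:

```shell
# Compute the ARC hit ratio from the ::arc counters shown above
# (values copied from the sample output; substitute your own)
hits=80839319
misses=3717788
ratio=$(awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "ARC hit ratio: ${ratio}%"   # -> ARC hit ratio: 95.6%
```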
You can also print all parameters that are set for ZFS with:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Kernel parameters can also be set online with:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
This sets zfs_arc_max to 4 GB = 0x100000000.
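To avoid converting gigabytes to the hex value by hand, here is a minimal sketch using plain POSIX shell arithmetic (the /etc/system and mdb syntax follow the examples on this page):

```shell
# Convert a desired ARC cap in GB to the decimal and hex values
# needed for /etc/system and mdb (4 GB -> 0x100000000)
gb=4
bytes=$((gb * 1024 * 1024 * 1024))
printf 'set zfs:zfs_arc_max = %d\n' "$bytes"
printf 'echo zfs_arc_max/Z%x | mdb -kw\n' "$bytes"
```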
== Limiting the ARC cache ==
Simply add to /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== Displaying ZFS space usage more clearly ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
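To get a quick total of a USED column, a minimal awk sketch (the sample lines stand in for real `zfs list` output; suffix handling is simplified to K/M/G):

```shell
# Sum a USED column with K/M/G suffixes into a GB total
# (sample lines stand in for real `zfs list` output)
sample='rpool/ROOT 6.29G
rpool/dump 1.00G
rpool/swap 512M'
total=$(echo "$sample" | awk '
  { v = $2 + 0
    if ($2 ~ /K$/) v /= 1024 * 1024     # KB -> GB
    if ($2 ~ /M$/) v /= 1024            # MB -> GB
    total += v }
  END { printf "%.2f", total }')
echo "${total} GB used"
```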
[[Kategorie:ZFS]]
fe8867b727c1c2920b1f95e276af482217572f6c
Sun Cluster - Repair Infrastructure
0
32
56
2012-05-23T07:45:36Z
Lollypop
2
Die Seite wurde neu angelegt: „Wenn bei einem Clusterknoten die Infrastructure-Datei beschädigt ist, oder ein nicht mehr vorhandenes Quorum-Device herauskonfiguriert werden soll, dann muß man…“
wikitext
text/x-wiki
If the infrastructure file on a cluster node is damaged, or a quorum device that no longer exists has to be configured out, perform the following steps:
1. Bring the node into non-cluster mode
<pre>
# reboot -- -sx
</pre>
From the OBP on SPARC systems:
<pre>
ok> boot -sx
</pre>
Or on x86/Opteron:
<pre>
b -sx
</pre>
2. Edit the infrastructure file:
<pre>
# mount /var
# export TERM=vt100
# vi /etc/cluster/ccr/infrastructure
</pre>
All quorum-device entries must be removed here, and (with more than two nodes) the votes of the other nodes must be set to 0,
e.g.:
cluster.nodes.2.properties.quorum_vote 0
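The edits in step 2 can also be scripted; a hedged sketch with sed (the sample lines below only stand in for the real CCR file format, and the file should be backed up before editing):

```shell
# Zero the other node's quorum vote and delete quorum-device lines,
# as described in step 2 (sample stands in for the real CCR file)
sample='cluster.nodes.2.properties.quorum_vote 1
cluster.quorum_devices.1.name d4'
result=$(echo "$sample" | sed \
    -e 's/\(quorum_vote[[:space:]]*\)[0-9][0-9]*/\10/' \
    -e '/quorum_devices/d')
echo "$result"
```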
3. Regenerate the checksum in the file:
<pre>
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
</pre>
4. Check that everything is OK:
<pre>
# /usr/cluster/lib/sc/chkinfr
</pre>
5. Reboot into cluster mode
<pre>
# reboot
</pre>
Alternative description by [http://www.edv-birk.de/ Lothar Birk]:
==Emergency situation: the cluster node gets no cluster quorum at boot==
===Boot into 'non-cluster' mode===
boot -xs
===Editing the infrastructure file in the CCR===
<code>
cd /etc/cluster/ccr
or
cd /etc/cluster/ccr/global
cp infrastructure 100610_infrastructure
vi infrastructure
- set the other node's quorum vote to 0
...node.X...quorum_vote 0
- delete all lines at the end of the file containing:
...quorum_devices...
/usr/cluster/lib/sc/ccradm -i infrastructure -o
or
/usr/cluster/lib/sc/ccradm recover -o infrastructure
</code>
===Boot back into cluster mode and create a quorum device===
<code>
init 6
clq add d1
</code>
[[Kategorie:SunCluster]]
56fe133eb0e30c45acebd9e962f710dbbcffedf9
57
56
2012-05-23T07:46:29Z
Lollypop
2
wikitext
text/x-wiki
If the infrastructure file on a cluster node is damaged, or a quorum device that no longer exists has to be configured out, perform the following steps:
1. Bring the node into non-cluster mode
<pre>
# reboot -- -sx
</pre>
From the OBP on SPARC systems:
<pre>
ok> boot -sx
</pre>
Or on x86/Opteron:
<pre>
b -sx
</pre>
2. Edit the infrastructure file:
<pre>
# mount /var
# export TERM=vt100
# vi /etc/cluster/ccr/infrastructure
</pre>
All quorum-device entries must be removed here, and (with more than two nodes) the votes of the other nodes must be set to 0,
e.g.:
cluster.nodes.2.properties.quorum_vote 0
3. Regenerate the checksum in the file:
<pre>
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
</pre>
4. Check that everything is OK:
<pre>
# /usr/cluster/lib/sc/chkinfr
</pre>
5. Reboot into cluster mode
<pre>
# reboot
</pre>
Alternative description by [http://www.edv-birk.de/ Lothar Birk]:
==Emergency situation: the cluster node gets no cluster quorum at boot==
===Boot into 'non-cluster' mode===
boot -xs
===Editing the infrastructure file in the CCR===
<pre>
cd /etc/cluster/ccr
or
cd /etc/cluster/ccr/global
cp infrastructure 100610_infrastructure
vi infrastructure
- set the other node's quorum vote to 0
...node.X...quorum_vote 0
- delete all lines at the end of the file containing:
...quorum_devices...
/usr/cluster/lib/sc/ccradm -i infrastructure -o
or
/usr/cluster/lib/sc/ccradm recover -o infrastructure
</pre>
===Boot back into cluster mode and create a quorum device===
<pre>
init 6
clq add d1
</pre>
[[Kategorie:SunCluster]]
7db28f21480eb7d92e92f4d312178b80ac1c1640
Category:SunCluster
14
33
58
2012-05-23T07:46:55Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]]“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
45811ac9bef9ab2254080294d01e6f892f5d9499
File:SC quickreference.pdf
6
34
59
2012-05-23T07:50:58Z
Lollypop
2
SunCluster Quickreference
wikitext
text/x-wiki
SunCluster Quickreference
3ce29facba59deb85cfa2d826259fe10d08ed812
SunCluster cheatsheet
0
35
60
2012-05-23T07:55:21Z
Lollypop
2
Die Seite wurde neu angelegt: „=SunCluster cheatsheet= * [[:File:SC_quickreference.pdf|SunCluster Quickreference]]“
wikitext
text/x-wiki
=SunCluster cheatsheet=
* [[:File:SC_quickreference.pdf|SunCluster Quickreference]]
e9d820edf1167d41f5613e6541e0f56e33d58b37
61
60
2012-05-23T07:55:36Z
Lollypop
2
wikitext
text/x-wiki
* [[:File:SC_quickreference.pdf|SunCluster Quickreference]]
e3a7b88ee9c1c55a6fd9f0dd02864076b42a2d9b
62
61
2012-05-23T07:56:28Z
Lollypop
2
wikitext
text/x-wiki
* [[Media:SC_quickreference.pdf|SunCluster Quickreference]]
0c473934d9607a4fadc6ab751b158e779b800e5c
63
62
2012-05-23T07:58:37Z
Lollypop
2
wikitext
text/x-wiki
* [[Media:SC_quickreference.pdf|SunCluster 3.x Quick Reference]]
da591cf7744e7df3dc1fcaad737b07f640e4ef9c
65
63
2012-05-23T08:01:01Z
Lollypop
2
wikitext
text/x-wiki
* [[Media:SC_quickreference.pdf|SunCluster 3.x Quick Reference]]
* [[Media:820-0318.pdf|SunCluster 3.x Quick Reference]]
[[Kategorie:SunCluster]]
e18cbd6cc704fcf50081b45c4fc359cf1bd8f0b7
66
65
2012-05-23T08:01:26Z
Lollypop
2
wikitext
text/x-wiki
* [[Media:SC_quickreference.pdf|SunCluster 3.x Quick Reference]]
* [[Media:820-0318.pdf|SunCluster 3.2 Quick Reference (Deutsch)]]
[[Kategorie:SunCluster]]
4a19681d508023c0fc75031a3f0a896a86494dbf
File:820-0318.pdf
6
36
64
2012-05-23T07:59:40Z
Lollypop
2
SunCluster 3.2 Quick Reference (Deutsch)
wikitext
text/x-wiki
SunCluster 3.2 Quick Reference (Deutsch)
d89166f549fa28dcaf9f48b6aad343c9039d92b7
Hauptseite
0
1
67
47
2012-05-23T11:36:53Z
Lollypop
2
wikitext
text/x-wiki
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=2>Projekte</categorytree>
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents User's Guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
72d31bdd4fdd9bd43fb54a1a79c3f0c19b8ded74
Bash cheatsheet
0
37
68
2012-05-25T06:43:54Z
Lollypop
2
Die Seite wurde neu angelegt: „=Nützliche Variablenersetzungen= ==dirname== <pre> $ myself=/usr/bin/blafasel ; echo ${myself%/*} /usr/bin </pre> ==basename== <pre> $ myself=/usr/bin/blafasel …“
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
[[Kategorie:KnowHow]]
15b45a6d9eaffcae59c6affad057353778336a9e
69
68
2012-05-25T06:45:34Z
Lollypop
2
hat „[[Bash]]“ nach „[[Bash cheatsheet]]“ verschoben: Einheitlichkeit
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
[[Kategorie:KnowHow]]
15b45a6d9eaffcae59c6affad057353778336a9e
71
69
2012-05-25T06:46:06Z
Lollypop
2
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
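The two expansions above can be combined to split a path into its directory and file parts without spawning dirname/basename processes; a quick sketch:

```shell
# Split a path into directory and file using only parameter expansion
myself=/usr/bin/blafasel
dir=${myself%/*}     # shortest match of /* stripped from the end
file=${myself##*/}   # longest match of */ stripped from the front
echo "$dir $file"    # -> /usr/bin blafasel
```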
[[Kategorie:Bash]]
62e990326ea3a8913236f0eb9c0b9d484c63bb97
Bash
0
38
70
2012-05-25T06:45:35Z
Lollypop
2
hat „[[Bash]]“ nach „[[Bash cheatsheet]]“ verschoben: Einheitlichkeit
wikitext
text/x-wiki
#WEITERLEITUNG [[Bash cheatsheet]]
9ac2e3dc869b764eeccc89dfc8e65fa66654ae64
Category:Bash
14
39
72
2012-05-25T06:46:25Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: KnowHow]]“
wikitext
text/x-wiki
[[Kategorie: KnowHow]]
66e66ecf096ffc26f093363afb83fca47a7b1982
Category:Ameisen
14
3
73
9
2012-05-29T20:31:01Z
Lollypop
2
wikitext
text/x-wiki
Ants
[[Kategorie:Tiere]]
968f61000932373ecc9da30eb87cb2c8d8beec0b
Category:Tausendfuesser
14
9
74
13
2012-05-29T20:31:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
cdedcc18051d8835b96ae206bd357542432afa24
Category:Tiere
14
40
75
2012-05-29T20:32:06Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Projekte]]“
wikitext
text/x-wiki
[[Kategorie:Projekte]]
e5d54bb57ba0950c24694ecf450e32e79e629d83
Category:Pflanzen
14
41
76
2012-05-29T20:33:18Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Projekte]]“
wikitext
text/x-wiki
[[Kategorie:Projekte]]
e5d54bb57ba0950c24694ecf450e32e79e629d83
Category:Arundo
14
42
77
2012-05-29T20:35:08Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Pflanzen]]“
wikitext
text/x-wiki
[[Kategorie:Pflanzen]]
251e5bc4da59e803a6a47f224643e4460e86c273
Arundo donax
0
43
78
2012-05-29T20:36:18Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Arundo]]“
wikitext
text/x-wiki
[[Kategorie:Arundo]]
a6092a89b1befba6803b47874fa6af02b7284251
79
78
2012-05-29T20:51:16Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Pfahlrohr
| Taxon_WissName = Arundo donax
| Taxon_Rang = Art
| Taxon_Autor = [[Carl von Linné|L.]]
| Taxon2_WissName = Arundo
| Taxon2_Rang = Gattung
| Taxon3_WissName = Arundineae
| Taxon3_Rang = Tribus
| Taxon4_WissName = Arundinoideae
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Süßgräser
| Taxon5_WissName = Poaceae
| Taxon5_Rang = Familie
| Taxon6_Name = Süßgrasartige
| Taxon6_WissName = Poales
| Taxon6_Rang = Ordnung
}}
[[Kategorie:Arundo]]
b07c840d88d59eb020fbfd42f1b934344a82eb20
Template:Taxobox
10
44
80
2012-05-29T21:05:44Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Taxobox | Taxon_Name = | Taxon_WissName = | Taxon_Rang = | Taxon_Autor = | Taxon2_Name = | Taxon2_WissName = | Taxon2_Rang =…“
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name =
| Taxon_WissName =
| Taxon_Rang =
| Taxon_Autor =
| Taxon2_Name =
| Taxon2_WissName =
| Taxon2_Rang =
| Taxon3_Name =
| Taxon3_WissName =
| Taxon3_Rang =
| Taxon4_Name =
| Taxon4_WissName =
| Taxon4_Rang =
| Taxon5_Name =
| Taxon5_WissName =
| Taxon5_Rang =
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang =
| Bild =
| Bildbeschreibung =
}}
fa323fc44ad403109453622caaed3fec613906fc
85
80
2012-05-29T21:30:44Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>{| cellpadding="2" cellspacing="1" width="300" class="taxobox {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|palaeobox}} float-right toptextcells" id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}{{{Taxon_WissName}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} style="text-align:center;font-size:8pt;" {{!}} [[Datei:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxobox/Zeile
| Rang = {{{Taxon6_Rang|}}}
| Name = {{{Taxon6_Name|}}}
| LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| WissName = {{{Taxon6_WissName|}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon5_Rang|}}}
| Name = {{{Taxon5_Name|}}}
| LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon4_Rang|}}}
| Name = {{{Taxon4_Name|}}}
| LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon3_Rang|}}}
| Name = {{{Taxon3_Name|}}}
| LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon2_Rang|}}}
| Name = {{{Taxon2_Name|}}}
| LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon_Rang|}}}
| Name = {{{Taxon_Name|}}}
| WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
| KeinRang = {{{Rangunterdrückung|}}}
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon5_Rang|}}}
| WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon4_Rang|}}}
| WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon3_Rang|}}}
| WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon2_Rang|}}}
| WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon_Rang|}}}
| WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinRang = {{#if: {{{Taxon2_Autor|}}}{{{Taxon3_Autor|}}}{{{Taxon4_Autor|}}}{{{Taxon5_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxobox/Rang|Rang={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}{{#if: {{{Taxon_Name|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}||nonitalic}}
|
| {{#ifexpr: {{str find|{{PAGENAME}}|(}} = -1
| {{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:}}''{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}|{{PAGENAME}}}}''}}
| <span style="display:none">[[Vorlage:Taxobox/Wartung/KlammerlemmaUndKursiv]]</span>
}}}}</includeonly><noinclude>{{Dokumentation}}
</noinclude>
196f8c9ce3c249a3f28812628415466230ae3e22
95
85
2012-05-29T21:54:23Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="1" width="300" {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|class="palaeobox float-right" id="Vorlage_Paläobox" summary="Paläobox"|class="taxobox float-right" id="Vorlage_Taxobox" summary="Taxobox"}} id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Taxon_Name|}}}||{{Taxoauswahl|Hervorheben=ja|Taxon={{{Taxon_Rang|}}}}}}}{{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}}}}}}}{{#if: {{{Taxon_Name|}}}||{{Taxoauswahl|Hervorheben=ja|Taxon={{{Taxon_Rang|}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt= {{!-}}
{{!}}<div style="text-align:center;font-size:8pt;">[[:Kategorie:Wikipedia:Bilderwunsch Taxobox|Hier fehlt ein Bild]]{{#ifeq: {{NAMESPACE}} | {{ns:0}} | <span style="display: none;">[[Kategorie:Wikipedia:Bilderwunsch Taxobox]]</span>}}</div>
|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} <div style="text-align:center;font-size:8pt;">[[Bild:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
</div>}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxozeile
| Taxon = {{{Taxon6_Rang|}}}
| Taxon_Name = {{{Taxon6_Name|}}}
| Taxon_LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| Taxon_WissName = {{{Taxon6_WissName|}}}
}}
{{Taxozeile
| Taxon = {{{Taxon5_Rang|}}}
| Taxon_Name = {{{Taxon5_Name|}}}
| Taxon_LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| Taxon_WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
}}
{{Taxozeile
| Taxon = {{{Taxon4_Rang|}}}
| Taxon_Name = {{{Taxon4_Name|}}}
| Taxon_LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| Taxon_WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
}}
{{Taxozeile
| Taxon = {{{Taxon3_Rang|}}}
| Taxon_Name = {{{Taxon3_Name|}}}
| Taxon_LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| Taxon_WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
}}
{{Taxozeile
| Taxon = {{{Taxon2_Rang|}}}
| Taxon_Name = {{{Taxon2_Name|}}}
| Taxon_LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| Taxon_WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
}}
{{Taxozeile
| Taxon = {{{Taxon_Rang|}}}
| Taxon_Name = {{{Taxon_Name|}}}
| Taxon_WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxozitat
| Taxon = {{{Taxon5_Rang|}}}
| Taxon_WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxozitat
| Taxon = {{{Taxon4_Rang|}}}
| Taxon_WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxozitat
| Taxon = {{{Taxon3_Rang|}}}
| Taxon_WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxozitat
| Taxon = {{{Taxon2_Rang|}}}
| Taxon_WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxozitat
| Taxon = {{{Taxon_Rang|}}}
| Taxon_WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinTaxon = {{#if: {{{Taxon2_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxoauswahl|Taxon={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}
</includeonly><noinclude>''This template is required by [[Vorlage:Ameisenart]] and [[Vorlage:Ameisengattung]].''</noinclude><noinclude>
[[Kategorie:Vorlage]]
</noinclude>
19bcbd42246ed3d6001280041e1b5a408697737e
96
95
2012-05-29T21:56:57Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>{| cellpadding="2" cellspacing="1" width="300" class="taxobox {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|palaeobox}} float-right toptextcells" id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}{{{Taxon_WissName}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} style="text-align:center;font-size:8pt;" {{!}} [[Datei:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxobox/Zeile
| Rang = {{{Taxon6_Rang|}}}
| Name = {{{Taxon6_Name|}}}
| LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| WissName = {{{Taxon6_WissName|}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon5_Rang|}}}
| Name = {{{Taxon5_Name|}}}
| LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon4_Rang|}}}
| Name = {{{Taxon4_Name|}}}
| LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon3_Rang|}}}
| Name = {{{Taxon3_Name|}}}
| LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon2_Rang|}}}
| Name = {{{Taxon2_Name|}}}
| LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon_Rang|}}}
| Name = {{{Taxon_Name|}}}
| WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
| KeinRang = {{{Rangunterdrückung|}}}
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon5_Rang|}}}
| WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon4_Rang|}}}
| WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon3_Rang|}}}
| WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon2_Rang|}}}
| WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon_Rang|}}}
| WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinRang = {{#if: {{{Taxon2_Autor|}}}{{{Taxon3_Autor|}}}{{{Taxon4_Autor|}}}{{{Taxon5_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxobox/Rang|Rang={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}{{#if: {{{Taxon_Name|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}||nonitalic}}
|
| {{#ifexpr: {{str find|{{PAGENAME}}|(}} = -1
| {{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:}}''{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}|{{PAGENAME}}}}''}}
| <span style="display:none">[[Vorlage:Taxobox/Wartung/KlammerlemmaUndKursiv]]</span>
}}}}</includeonly><noinclude>{{Dokumentation}}
</noinclude>
196f8c9ce3c249a3f28812628415466230ae3e22
Template:Ameisengattung
10
45
81
2012-05-29T21:18:32Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly> {| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-lef…“
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px;margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
| style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">[[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]
{{{Bildbeschreibung}}}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
|-
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisengattung]][[Kategorie:{{{Unterfamilie}}}]][[Kategorie:{{{Gattung}}}|!]]}}</includeonly>
<noinclude>
77c66c8b9a1fe01ea9388284d1d867b73e95333c
Lasius fuliginosus
0
46
82
2012-05-29T21:20:56Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Ameisengattung | Autor = (Fabricius, 1804) | Unterfamilie = Formicinae | Gattung = Lasius }}“
wikitext
text/x-wiki
{{Ameisengattung
| Autor = (Fabricius, 1804)
| Unterfamilie = Formicinae
| Gattung = Lasius
}}
b027b928c19165d56c8c5d740e869ed6897a1e66
Template:Ameisenart
10
47
83
2012-05-29T21:22:31Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly> {| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left…“
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:{{{Gattung}}}|{{{Art}}}]] [[Kategorie:Ameisenart]]}}</includeonly>
<noinclude>
17d588ef1f426900ceed3165d021ab31238fd4a5
Lasius flavus
0
17
84
30
2012-05-29T21:25:00Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Gelbe Wiesenameise
| WissName = Lasius flavus
| Autor = (Fabricius, 1782)
| Untergattung = Cautolasius
| Gattung = Lasius
| Unterfamilie = Formicinae
| Art = flavus
| Bild = Lasius flavus.jpg
| Bildbeschreibung = Arbeiterin von ''Lasius flavus''
| Verbreitung = Europa
| Habitat = Wiese
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Oligogynie|oligogyn]]
| maxKolo = 100.000
}}
[[Kategorie:Lasius]]
7cf6ad92041c23b264a76ca7d47f5239ae8917f0
Template:Taxobox/IstRangKursiv
10
48
86
2012-05-29T21:37:50Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>{{#switch: {{lc:{{{1}}}}} |gattung|genus|untergattung|subgenus|sektion|sectio|untersektion|subsectio|serie|series|unterserie|subseries|stirps|stirps|…“
wikitext
text/x-wiki
<includeonly>{{#switch: {{lc:{{{1}}}}}
|gattung|genus|untergattung|subgenus|sektion|sectio|untersektion|subsectio|serie|series|unterserie|subseries|stirps|artenkreis|superspecies|superspezies|art|species|unterart|subspecies|varietät|varietas|untervarietät|subvarietas|form|forma|unterform|subforma = 1
| #default = 0
}}</includeonly><noinclude>Diese Vorlage wird innerhalb der [[Vorlage:Taxobox]] verwendet, technische Dokumentation siehe [[Vorlage:Taxobox/Doku/Tech]].
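Ein Anwendungsbeispiel (zur Illustration; „Gattung“ und „Familie“ sind frei gewählte Beispielwerte):
<pre>
{{Taxobox/IstRangKursiv|Gattung}} → 1 (Rang wird kursiv gesetzt)
{{Taxobox/IstRangKursiv|Familie}} → 0 (Rang bleibt aufrecht)
</pre>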
[[Kategorie:Vorlage:Untervorlage|Taxobox/IstRangKursiv]]
</noinclude>
d0653c2411fec4154728ecb78ac465c86126ef9c
Template:Taxobox/Rang
10
49
87
2012-05-29T21:39:20Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>{{#switch: {{lc:{{{Rang}}}}} ||ohne= |ohne rang= ohne Rang |klassifikation= {{#if:{{{Genitiv|}}}|der }}[[Systematik (Biologie)|Klassifikation…“
wikitext
text/x-wiki
<includeonly>{{#switch: {{lc:{{{Rang}}}}}
||ohne=
|ohne rang= ohne Rang
|klassifikation= {{#if:{{{Genitiv|}}}|der }}[[Systematik (Biologie)|Klassifikation]]
|domäne= {{#if:{{{Genitiv|}}}|der }}[[Domäne (Biologie)|Domäne]]
|reich|regnum = {{#if:{{{Genitiv|}}}|des }}[[Reich (Biologie)|Reich]]{{#ifeq:{{{Plural|}}}|ja|e}}{{#if:{{{Genitiv|}}}|s}}
|unterreich|subregnum = {{#if:{{{Genitiv|}}}|des }}[[Reich (Biologie)|Unterreich]]{{#ifeq:{{{Plural|}}}|ja|e}}{{#if:{{{Genitiv|}}}|s}}
|überabteilung|superdivisio = {{#if:{{{Genitiv|}}}|der }}[[Abteilung (Biologie)|Überabteilung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|abteilung|divisio = {{#if:{{{Genitiv|}}}|der }}[[Abteilung (Biologie)|Abteilung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterabteilung|subdivisio = {{#if:{{{Genitiv|}}}|der }}[[Abteilung (Biologie)|Unterabteilung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|überstamm|superphylum = {{#if:{{{Genitiv|}}}|des }}[[Stamm (Systematik)|{{#ifeq:{{{Plural|}}}|ja|Überstämme|Überstamm}}]]{{#if:{{{Genitiv|}}}|s}}
|stamm|phylum = {{#if:{{{Genitiv|}}}|des }}[[Stamm (Systematik)|{{#ifeq:{{{Plural|}}}|ja|Stämme|Stamm}}]]{{#if:{{{Genitiv|}}}|s}}
|unterstamm|subphylum = {{#if:{{{Genitiv|}}}|des }}[[Stamm (Systematik)|{{#ifeq:{{{Plural|}}}|ja|Unterstämme|Unterstamm}}]]{{#if:{{{Genitiv|}}}|s}}
|überklasse|superclassis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Überklasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|reihe|seria = {{#if:{{{Genitiv|}}}|der }}[[Reihe (Biologie)|Reihe]]{{#ifeq:{{{Plural|}}}|ja|n}}
|klasse|classis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Klasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterklasse|subclassis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Unterklasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|teilklasse|infraclassis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Teilklasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|überkohorte|supercohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Überkohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|kohorte|cohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Kohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterkohorte|subcohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Unterkohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|teilkohorte|infracohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Teilkohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|überordnung|superordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Überordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|ordnung|ordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Ordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterordnung|subordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Unterordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|teilordnung|infraordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Teilordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|überfamilie|superfamilia = {{#if:{{{Genitiv|}}}|der }}[[Familie (Biologie)|Überfamilie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|familie|familia = {{#if:{{{Genitiv|}}}|der }}[[Familie (Biologie)|Familie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterfamilie|subfamilia = {{#if:{{{Genitiv|}}}|der }}[[Familie (Biologie)|Unterfamilie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|tribus = {{#if:{{{Genitiv|}}}|der }}[[Tribus (Biologie)|Tribus]]{{#ifeq:{{{Plural|}}}|ja|}}
|untertribus|subtribus = {{#if:{{{Genitiv|}}}|der }}[[Tribus (Biologie)|Untertribus]]{{#ifeq:{{{Plural|}}}|ja|}}
|gattung|genus = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Gattung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|untergattung|subgenus = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Untergattung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|sektion|sectio = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Sektion]]{{#ifeq:{{{Plural|}}}|ja|en}}
|untersektion|subsectio = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Untersektion]]{{#ifeq:{{{Plural|}}}|ja|en}}
|serie|series = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Serie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterserie|subseries = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Unterserie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|stirps = {{#if:{{{Genitiv|}}}|der }}[[Stirps|{{#ifeq:{{{Plural|}}}|ja|Stirpes|Stirps}}]]
|artenkreis|superspecies|superspezies = {{#if:{{{Genitiv|}}}|der }}[[Superspezies]]{{#ifeq:{{{Plural|}}}|ja|}}
|art|species = {{#if:{{{Genitiv|}}}|der }}[[Art (Biologie)|Art]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterart|subspecies = {{#if:{{{Genitiv|}}}|der }}[[Unterart]]{{#ifeq:{{{Plural|}}}|ja|en}}
|varietät|varietas = {{#if:{{{Genitiv|}}}|der }}[[Varietät (Biologie)|Varietät]]{{#ifeq:{{{Plural|}}}|ja|en}}
|untervarietät|subvarietas = {{#if:{{{Genitiv|}}}|der }}[[Varietät (Biologie)|Untervarietät]]{{#ifeq:{{{Plural|}}}|ja|en}}
|form|forma = {{#if:{{{Genitiv|}}}|der }}[[Form (Biologie)|Form]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterform|subforma = {{#if:{{{Genitiv|}}}|der }}[[Form (Biologie)|Unterform]]{{#ifeq:{{{Plural|}}}|ja|en}}
| #default = <div class="error">[[Vorlage:Taxobox/Rang/Doku|Warnung: Unbekannter Rang]]</div>
}}</includeonly><noinclude>Diese Vorlage wird innerhalb der [[Vorlage:Taxobox]] verwendet, Dokumentation siehe [[Vorlage:Taxobox/Doku/Tech]].
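Anwendungsbeispiele (zur Illustration; die Parameterwerte sind frei gewählt):
<pre>
{{Taxobox/Rang|Rang=Gattung}}            → [[Gattung (Biologie)|Gattung]]
{{Taxobox/Rang|Rang=Gattung|Genitiv=ja}} → der [[Gattung (Biologie)|Gattung]]
{{Taxobox/Rang|Rang=Art|Plural=ja}}      → [[Art (Biologie)|Art]]en
</pre>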
[[Kategorie:Vorlage:Untervorlage|Taxobox/Rang]]
</noinclude>
08d611e98b59296887c6a3483247963f54967a0d
Template:Taxobox/Zeile
10
50
88
2012-05-29T21:40:06Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>{{#if: {{{Rang|}}}{{{WissName|}}}{{{Name|}}} | {{!-}} {{#ifeq: {{lc:{{{Rang|}}}}}|incertae sedis|{{#ifeq: {{{Name|}}}{{{WissName|}}}||{{!}} style="te…“
wikitext
text/x-wiki
<includeonly>{{#if: {{{Rang|}}}{{{WissName|}}}{{{Name|}}} | {{!-}}
{{#ifeq: {{lc:{{{Rang|}}}}}|incertae sedis|{{#ifeq: {{{Name|}}}{{{WissName|}}}||{{!}} style="text-align:center;" colspan="2" {{!}} ''[[incertae sedis]]''|{{!}}<div class="error">[[Vorlage:Taxobox/Rang|Warnung: Bei „incertae sedis“ keine weiteren Angaben in dieser Zeile möglich!]] </div>
}}|{{!}} {{#if: {{#ifeq: {{lc: {{{KeinRang|}}}}} | ja | x }}{{#ifeq: {{lc: {{{Rang|}}}}} | ohne | x }}{{#if: {{{Rang|}}}||x}}||''{{Taxobox/Rang|Rang={{{Rang|}}}}}:''}} {{!!}} {{#if:{{{Name|}}}|{{#if:{{{KeinLink|}}}|{{{Name|}}}|{{#if:{{{LinkName|}}}|[[{{{LinkName|}}}|{{{Name|}}}]]|[[{{{Name|}}}]]}}}} }} {{#if:{{{WissName|}}}|{{#if:{{{Name|}}}| ({{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}{{{WissName|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}})|{{#if:{{{KeinLink|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}{{{WissName|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}[[{{#if:{{{LinkName|}}}|{{{LinkName|}}}{{!}}}}{{{WissName|}}}]]{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}}}}}}}}}}}</includeonly><noinclude>Diese Vorlage wird innerhalb der [[Vorlage:Taxobox]] verwendet, technische Dokumentation siehe [[Vorlage:Taxobox/Doku/Tech]].
[[Kategorie:Vorlage:Untervorlage|Taxobox/Zeile]]
</noinclude>
064861eaf4ff04a16ce62fd0556ab1e6888909b3
Template:Taxobox/Zitat
10
51
89
2012-05-29T21:41:15Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>! [[Nomenklatur (Biologie)|Wissenschaftlicher Name]] {{#if:{{{KeinRang|}}} | | {{Taxobox/Rang|Rang={{{Rang|}}}|Genitiv=ja}}}} {{!-}} {{!}} class="tax…“
wikitext
text/x-wiki
<includeonly>! [[Nomenklatur (Biologie)|Wissenschaftlicher Name]] {{#if:{{{KeinRang|}}} | | {{Taxobox/Rang|Rang={{{Rang|}}}|Genitiv=ja}}}}
{{!-}}
{{!}} class="taxo-name" {{!}} {{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}{{{WissName|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}
{{!-}}
{{!}} class="Person" {{!}} {{{Autor|}}}
{{!-}}</includeonly><noinclude>Diese Vorlage wird innerhalb der [[Vorlage:Taxobox]] verwendet, technische Dokumentation siehe [[Vorlage:Taxobox/Doku/Tech]].
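Ein Anwendungsbeispiel (zur Illustration; die Werte sind frei gewählt):
<pre>
{{Taxobox/Zitat
| Rang     = Art
| WissName = Lasius niger
| Autor    = (Linnaeus, 1758)
}}
</pre>
Dies erzeugt die Kopfzeile „Wissenschaftlicher Name der Art“ sowie je eine Tabellenzeile mit dem kursiven Namen und dem Autorzitat.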
[[Kategorie:Vorlage:Untervorlage|Taxobox/Zitat]]
</noinclude>
76d9cfd474d15d88f25f42f70b0102f94fdc7704
Template:Dokumentation
10
52
90
2012-05-29T21:45:23Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Nobots}}{{Tausendfach verwendet}}<onlyinclude><hr class="rulerdocumentation hintergrundfarbe6" style="margin:1em 0em; height:0.7ex;" /> {{#ifeq:{{NAMESPACE}}|{{…“
wikitext
text/x-wiki
{{Nobots}}{{Tausendfach verwendet}}<onlyinclude><hr class="rulerdocumentation hintergrundfarbe6" style="margin:1em 0em; height:0.7ex;" />
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|<strong class="error">Achtung: Die {{Vorlage|Dokumentation}} wird im Artikelnamensraum verwendet. Wahrscheinlich fehlt <code><noinclude></code> in einer eingebundenen Vorlage oder die Kapselung ist fehlerhaft. Bitte {{Bearbeiten|text=entferne diesen Fehler}}.</strong>|
<div style="float:right; clear:left;">[[Datei:Information icon.svg|frameless|18px|link=#Dokumentation.Info|Informationen zu dieser Dokumentation|alt=]]</div>
{{Überschriftensimulation 4|1=<span class="editsection">[<span class="plainlinks">[{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=edit}} Bearbeiten]</span>]</span> Dokumentation}}
{{#ifexist: {{SUBJECTPAGENAME}}/Doku|
{{{{SUBJECTPAGENAME}}/Doku}}
<br /><hr style="border:none; height:0.7ex; clear:both;" />
{{{!}} {{Bausteindesign5}}
{{!}} Bei Fragen zu dieser [[Hilfe:Vorlagen|Vorlage]] kannst Du Dich an die [[Wikipedia:WikiProjekt Vorlagen/Werkstatt|Vorlagenwerkstatt]] wenden.
{{!}}}
{{{!}} cellspacing="8" cellpadding="0" class="plainlinks" style="background:transparent; margin: 2px 0;" id="Dokumentation.Info"
{{!}} style="position:relative; width:35px; vertical-align:top;" {{!}} [[Datei:Information icon.svg|30px|Information|alt=]]
{{!}} style="width: 100%;" {{!}}
<ul>
<li>{{#switch:{{ParmPart|1|{{{nr|<noinclude>10</noinclude>}}}}}
| 1 = {{Verwendung|ns=1}} der Vorlage auf Artikel-Diskussionsseiten.
| 2 = {{Verwendung|ns=2}} der Vorlage auf Benutzerseiten.
| 3 = {{Verwendung|ns=3}} der Vorlage auf Benutzer-Diskussionsseiten.
| 4 = {{Verwendung|ns=4}} der Vorlage auf Systemseiten.
| 6 = {{Verwendung|ns=6}} der Vorlage bei Dateien.
| 10 = {{Verwendung|ns=10}} der Vorlage auf Vorlagenseiten.
| 11 = {{Verwendung|ns=11}} der Vorlage auf Vorlagen-Diskussionsseiten.
| 14 = {{Verwendung|ns=14}} der Vorlage auf Kategorieseiten.
| #default = {{Verwendung}} der Vorlage in Artikeln.
}}</li>
<li>{{#switch:{{ParmPart|2|{{{nr|<noinclude>10</noinclude>}}}}}
| 1 = {{Verwendung|ns=1}} der Vorlage auf Artikel-Diskussionsseiten.
| 2 = {{Verwendung|ns=2}} der Vorlage auf Benutzerseiten.
| 3 = {{Verwendung|ns=3}} der Vorlage auf Benutzer-Diskussionsseiten.
| 4 = {{Verwendung|ns=4}} der Vorlage auf Systemseiten.
| 6 = {{Verwendung|ns=6}} der Vorlage bei Dateien.
| 10 = {{Verwendung|ns=10}} der Vorlage auf Vorlagenseiten.
| 11 = {{Verwendung|ns=11}} der Vorlage auf Vorlagen-Diskussionsseiten.
| 14 = {{Verwendung|ns=14}} der Vorlage auf Kategorieseiten.
}}</li>
<li> Diese Dokumentation befindet sich [[{{SUBJECTPAGENAME}}/Doku|auf einer eingebundenen Unterseite]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Doku|/[[{{TALKPAGENAME}}/Doku|Diskussion]]}})</span>.</li>
{{#ifexist: {{SUBJECTPAGENAME}}/Meta
| <li>Die Metadaten ([[Hilfe:Kategorien|Kategorien]] und [[Hilfe:Internationalisierung|Interwikis]]) {{#ifeq:{{NAMESPACE}}|{{ns:2}}
| in [[{{SUBJECTPAGENAME}}/Meta]] werden '''nicht''' eingebunden, weil sich die Vorlage im [[Hilfe:Benutzernamensraum|Benutzernamensraum]] befindet
| werden [[{{SUBJECTPAGENAME}}/Meta|von einer Unterseite eingebunden]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Meta|/[[{{TALKPAGENAME}}/Meta|Diskussion]]}})</span>
}}.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=edit&preload=Vorlage:Dokumentation/preload-meta}} Metadatenseite erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Wartung
| <li>Für diese Vorlage existiert eine [[{{SUBJECTPAGENAME}}/Wartung|Wartungsseite]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Wartung|/[[{{TALKPAGENAME}}/Wartung|Diskussion]]}})</span> zum Auffinden fehlerhafter Verwendungen.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=edit&preload=Vorlage:Dokumentation/preload-wartung}} Wartungsseite erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/XML
| <li>Für diese Vorlage existiert eine [[{{SUBJECTPAGENAME}}/XML|XML-Beschreibung]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/XML|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/XML|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/XML|/[[{{TALKPAGENAME}}/XML|Diskussion]]}})</span> für den [[Wikipedia:Helferlein/Vorlagen-Meister|Vorlagenmeister]].</li>
| <li class="metadata metadata-label">[[tools:~revolus/Template-Master/index.de.html|XML-Beschreibungsseite erstellen]]</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Test
| <li>Anwendungsbeispiele und Funktionalitätsprüfungen befinden sich auf der [[{{SUBJECTPAGENAME}}/Test|Testseite]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Test|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Test|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Test|/[[{{TALKPAGENAME}}/Test|Diskussion]]}})</span>.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Test|action=edit&preload=Vorlage:Dokumentation/preload-test}} Test-/Beispielseite erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Druck
| <li>Es existiert eine spezielle [[{{SUBJECTPAGENAME}}/Druck|Druckversion]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Druck|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Druck|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Druck|/[[{{TALKPAGENAME}}/Druck|Diskussion]]}})</span> für die [[Hilfe:Buchfunktion|Buchfunktion]].</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Druck|action=edit&preload=Vorlage:Dokumentation/preload-druck}} Druckversion erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Editnotice
| <li>Es existiert eine [[{{SUBJECTPAGENAME}}/Editnotice|Editnotice]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Editnotice|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Editnotice|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Editnotice|/[[{{TALKPAGENAME}}/Editnotice|Diskussion]]}})</span>, die beim Bearbeiten angezeigt wird.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Editnotice|action=edit&preload=Vorlage:Dokumentation/preload-editnotice}} Editnotice erstellen].</li>
}}
<li>[[Spezial:Präfixindex/{{SUBJECTPAGENAME}}/|Liste der Unterseiten]].</li>
</ul>
{{!}}}
|<span class="plainlinks" style="font-size:150%;">
* [{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=edit&preload=Vorlage:Dokumentation/preload-doku}} Dokumentation erstellen]
{{#ifexist:{{SUBJECTPAGENAME}}/Meta||
* [{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=edit&preload=Vorlage:Dokumentation/preload-meta}} Metadatenseite erstellen]}}
{{#ifexist:{{SUBJECTPAGENAME}}/Wartung||
* [{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=edit&preload=Vorlage:Dokumentation/preload-wartung}} Wartungsseite erstellen]}}
{{#ifexist:{{SUBJECTPAGENAME}}/Test||
* [{{fullurl:{{SUBJECTPAGENAME}}/Test|action=edit&preload=Vorlage:Dokumentation/preload-test}} Test-/Beispielseite erstellen]}}
</span>{{#ifeq:{{NAMESPACE}}|{{ns:10}}|
[[Kategorie:Vorlage:nicht dokumentiert|{{PAGENAME}}]]
}}
}}
<div style="clear:both;" />
{{#ifeq:{{NAMESPACE}}|{{ns:2}}||{{#ifexist: {{SUBJECTPAGENAME}}/Meta|{{{{SUBJECTPAGENAME}}/Meta}}
}}}}
}}<hr class="rulerdocumentation hintergrundfarbe6" style="margin:1em 0em; height:0.7ex;" /></onlyinclude>
b68e1351f1da05d1f43a53948d1ee8ac39b472b8
Template:Überschriftensimulation 4
10
53
91
2012-05-29T21:46:26Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Anker|{{{1}}}}}<div class="Vorlage_Ueberschriftensimulation_4" style="margin:0; margin-bottom:.3em; padding-top:.5em; padding-bottom:.17em; background:none; fon…“
wikitext
text/x-wiki
{{Anker|{{{1}}}}}<div class="Vorlage_Ueberschriftensimulation_4" style="margin:0; margin-bottom:.3em; padding-top:.5em; padding-bottom:.17em; background:none; font-size:116%; color:black; font-weight:bold">{{{1}}}</div><noinclude>
----
Simuliert in ''Diskussionsseiten'' eine Überschrift, die nicht im Inhaltsverzeichnis erscheinen soll. In ''Artikeln'' darf diese Vorlage nicht verwendet werden; dafür gibt es andere Lösungen, siehe [[Hilfe:Inhaltsverzeichnis]].
Für Syntax und Anwendung siehe [[Wikipedia:Textbausteine/Formatierungshilfen]].
[[Kategorie:Vorlage:Formatierungshilfe|Uberschriftensimulation 4]]
</noinclude>
ab55d954365b9fa1665fd76747fedb5b3826a2a8
Template:!
10
54
92
2012-05-29T21:48:18Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>|</includeonly><noinclude>''Diese Vorlage wird für [[Vorlage:Ameisenart]] und [[Vorlage:Ameisengattung]] benötigt.''</noinclude><noinclude> [[Kateg…“
wikitext
text/x-wiki
<includeonly>|</includeonly><noinclude>''Diese Vorlage wird für [[Vorlage:Ameisenart]] und [[Vorlage:Ameisengattung]] benötigt.''</noinclude><noinclude>
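Innerhalb von Parserfunktionen wie <code>#if</code> trennt das Zeichen <code>|</code> Parameter; Tabellensyntax muss dort deshalb über diese Vorlage maskiert werden. Ein Beispiel (zur Illustration; „Wert“ ist ein frei gewählter Parametername):
<pre>
{{#if: {{{Wert|}}} |
{{!-}}
{{!}} Wert:
{{!}} {{{Wert|}}}
}}
</pre>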
[[Kategorie:Vorlage]]
</noinclude>
157a139a5346c9b1ee4d23940c14347ca885264a
Template:!!
10
55
93
2012-05-29T21:49:28Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>||</includeonly><noinclude>''Diese Vorlage wird für [[Vorlage:Ameisenart]] und [[Vorlage:Ameisengattung]] benötigt.''</noinclude><noinclude> [[Kate…“
wikitext
text/x-wiki
<includeonly>||</includeonly><noinclude>''Diese Vorlage wird für [[Vorlage:Ameisenart]] und [[Vorlage:Ameisengattung]] benötigt.''</noinclude><noinclude>
[[Kategorie:Vorlage]]
</noinclude>
52144e87a27494565ab2949e32af9f8f53da7ce0
Template:!-
10
56
94
2012-05-29T21:51:08Z
Lollypop
2
Die Seite wurde neu angelegt: „<includeonly>|-</includeonly><noinclude>''Diese Vorlage wird für [[Vorlage:Ameisenart]] und [[Vorlage:Ameisengattung]] benötigt.''</noinclude><noinclude> [[Kate…“
wikitext
text/x-wiki
<includeonly>|-</includeonly><noinclude>''Diese Vorlage wird für [[Vorlage:Ameisenart]] und [[Vorlage:Ameisengattung]] benötigt.''</noinclude><noinclude>
[[Kategorie:Vorlage]]
</noinclude>
59369d5f08dc57133fab641e030ad5a6157a256e
Template:Anker
10
57
97
2012-05-29T22:01:05Z
Lollypop
2
Die Seite wurde neu angelegt: „{{#if: {{{1|}}} | {{Anker/code|{{{1|}}}}}{{#if: {{{2|}}} | {{Anker/code|{{{2|}}}}}{{#if: {{{3|}}} | {{Anker/code|{{{3|}}}}}{{#if: {{{4|}}} | {{Anker/code|{{{4|}}}…“
wikitext
text/x-wiki
{{#if: {{{1|}}} | {{Anker/code|{{{1|}}}}}{{#if: {{{2|}}} | {{Anker/code|{{{2|}}}}}{{#if: {{{3|}}} | {{Anker/code|{{{3|}}}}}{{#if: {{{4|}}} | {{Anker/code|{{{4|}}}}} {{#if: {{{5|}}} | {{Anker/code|{{{5|}}}}}{{#if: {{{6|}}} | {{Anker/code|{{{6|}}}}} }} }}}} }} }} }}<noinclude>
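Ein Anwendungsbeispiel (zur Illustration; „Geschichte“ ist ein frei gewählter Ankername): <code><nowiki>{{Anker|Geschichte}}</nowiki></code> erzeugt über [[Vorlage:Anker/code]] unsichtbare Sprungziele, sodass der Abschnitt anschließend mit <code><nowiki>[[Seitenname#Geschichte]]</nowiki></code> verlinkt werden kann.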
{{Dokumentation}}</noinclude>
04f5e273882ac64467139a06e68acd39db1349eb
Template:Anker/code
10
58
98
2012-05-29T22:02:08Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Dokumentation/Unterseite}} <onlyinclude><span id="{{anchorencode:{{{1|{{{anchor|anchor}}}}}}}}"><span id="Anker:{{anchorencode:{{{1|{{{anchor|anchor}}}}}}}}"><…“
wikitext
text/x-wiki
{{Dokumentation/Unterseite}}
<onlyinclude><span id="{{anchorencode:{{{1|{{{anchor|anchor}}}}}}}}"><span id="Anker:{{anchorencode:{{{1|{{{anchor|anchor}}}}}}}}"></span></span></onlyinclude>
bdc479327ae19875b735889778d264af584bd52d
Template:Dokumentation/Unterseite
10
59
99
2012-05-29T22:04:33Z
Lollypop
2
Die Seite wurde neu angelegt: „<onlyinclude>{| {{Bausteindesign3}} | [[Datei:Information icon.svg|30px|Dokumentations-Unterseite|link=]] |style="width: 100%;"| Diese Seite ist eine Untervorlage…“
wikitext
text/x-wiki
<onlyinclude>{| {{Bausteindesign3}}
| [[Datei:Information icon.svg|30px|Dokumentations-Unterseite|link=]]
|style="width: 100%;"| Diese Seite ist eine Untervorlage von '''[[{{{1|{{#rel2abs:{{FULLPAGENAME}}/..}}}}}]]'''.
|}<includeonly>{{#ifeq:{{NAMESPACE}}|{{ns:10}}|
[[Kategorie:Vorlage:Untervorlage|{{PAGENAME}}]]
<!--Wartung--><span style="display:none;">{{#ifexist:{{#rel2abs:{{FULLPAGENAME}}/..}}
|<!--nichts-->
|{{#if:{{#rel2abs:{{FULLPAGENAME}}/..}}
| [[Vorlage:Dokumentation/Wartung/Unterseite verwaist]]
| [[Vorlage:Dokumentation/Wartung/keine echte Unterseite]]
}}
}}{{#if:{{{1|}}}
| [[Vorlage:Dokumentation/Wartung/Unterseite mit abweichender Oberseite]]
}}</span>
}}</includeonly></onlyinclude>
[[Kategorie:Vorlage:für Vorlagen| {{PAGENAME}}]]
[[Kategorie:Vorlage:mit Kategorisierung]]
dc75736be7d02a65694287dde9cec66c5a060e0a
Formica fusca
0
19
100
32
2012-05-29T22:07:30Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Grauschwarze Sklavenameise
| WissName = Formica fusca
| Autor = Linnaeus, 1758
| Untergattung = Serviformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = fusca
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[polygyn]]
}}
[[Kategorie:Formica]]
6ceb0d78fbbf0695c08fceb6740eb6f843648abe
Messor barbarus
0
10
101
17
2012-05-29T22:08:43Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1767)
| WissName = Messor barbarus
| Gattung = Messor
| Unterfamilie = Myrmicinae
| Art = barbarus
| Bild = Messor_barbarus_Major.jpg
| Bildbeschreibung = Major-Arbeiterin von ''Messor barbarus''
| Verbreitung = Südeuropa, Afrika, Asien
| Koeniginnen = [[Monogynie|monogyn]]
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
}}[[Kategorie:Messor]]
c79b6a982406ca46589da75bcb4bb2c10b32ea07
102
101
2012-05-29T22:09:23Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1767)
| WissName = Messor barbarus
| Gattung = Messor
| Unterfamilie = Myrmicinae
| Art = barbarus
| Verbreitung = Südeuropa, Afrika, Asien
| Koeniginnen = [[Monogynie|monogyn]]
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
}}[[Kategorie:Messor]]
f74aed81c525bfb19ec500f83ce6fe8e589c38f6
Myrmica rubra
0
14
103
27
2012-05-29T22:11:06Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1758)
| DeName = Rote Gartenameise
| WissName = Myrmica rubra
| Gattung = Myrmica
| Unterfamilie = Myrmicinae
| Art = rubra
| Verbreitung = Europa, Asien, Afrika und Nordamerika
| Habitat = Gärten, Wälder und Wiesen, zumeist unter Steinen, Holz,<br>o. ä.
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|semiclaustral]]
| Koeniginnen = [[Polygynie|polygyn]]
}}[[Kategorie:Myrmica]]
7616273d8ce62ff68b622bf3d539edbeecfe6bae
Tapinoma sp.
0
60
104
2012-05-29T22:20:12Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Ameisengattung | DeName = | WissName = Tapinoma sp. | Autor = | Unterfamilie = Dolichoderinae | Gattung = Tapinoma | Art = spec…“
wikitext
text/x-wiki
{{Ameisengattung
| DeName =
| WissName = Tapinoma sp.
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = spec.
| Koeniginnen = [[Polygynie|polygyn]]
}}
39cc4c44ceae7babf23dae2e4758c5417d30c2f3
Category:Tapinoma
14
61
105
2012-05-29T22:20:54Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Formica fusca
0
19
106
100
2012-05-29T22:21:44Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Grauschwarze Sklavenameise
| WissName = Formica fusca
| Autor = Linnaeus, 1758
| Untergattung = Serviformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = fusca
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[polygyn]]
}}
121f2461dcd47802cf8bbb382bd1791192e7f072
Lasius flavus
0
17
107
84
2012-05-29T22:23:00Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Gelbe Wiesenameise
| WissName = Lasius flavus
| Autor = (Fabricius, 1782)
| Untergattung = Cautolasius
| Gattung = Lasius
| Unterfamilie = Formicinae
| Art = flavus
| Verbreitung = Europa
| Habitat = Wiese
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Oligogynie|oligogyn]]
| maxKolo = 100.000
}}
f3753be59ac9aecb9e3d2b005a5fb40384d473c7
Lasius fuliginosus
0
46
108
82
2012-05-29T22:25:07Z
Lollypop
2
hat „[[Lasius]]“ nach „[[Lasius fuliginosus]]“ verschoben
wikitext
text/x-wiki
{{Ameisengattung
| Autor = (Fabricius, 1804)
| Unterfamilie = Formicinae
| Gattung = Lasius
}}
b027b928c19165d56c8c5d740e869ed6897a1e66
110
108
2012-05-29T22:25:43Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Latreille, 1798)
| WissName = Lasius fuliginosus
| Gattung = Lasius
| Untergattung = Dendrolasius
| Unterfamilie = Formicinae
| Art = fuliginosus
| Bild = Lasius fuliginosus.jpg
| Bildbeschreibung = <br />''L. fuliginosus'' Arbeiterin
| Verbreitung = Mitteleuropa
| Habitat = Laub- und Nadelwälder, Parks; meist in Holz nistend
| Gruendung = [[Gründung#Die abhängige Koloniegründung durch temporären Sozialparasitismus|sozialparasitär]]
| Koeniginnen = [[Polygynie|polygyn]], [[Oligogynie|oligogyn]]
| maxKolo = bis 2 Millionen
}}
68a52f89bd94a0dbee06b605647ea9117ba4434a
Lasius
0
62
109
2012-05-29T22:25:08Z
Lollypop
2
hat „[[Lasius]]“ nach „[[Lasius fuliginosus]]“ verschoben
wikitext
text/x-wiki
#WEITERLEITUNG [[Lasius fuliginosus]]
123159cf108d99c0e3fc20a01ceafa6646f14e12
Temnothorax nylanderi
0
63
111
2012-05-29T22:30:59Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Ameisenart | DeName = Nylanders' Schmalbrustameise | WissName = Temnothorax nylanderi | Autor = (Förster, 1850) | Gattung = T…“
wikitext
text/x-wiki
{{Ameisenart
| DeName = Nylanders Schmalbrustameise
| WissName = Temnothorax nylanderi
| Autor = (Förster, 1850)
| Gattung = Temnothorax
| Unterfamilie = Myrmicinae
| Art = nylanderi
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]], [[Pleometrose]]
| Koeniginnen = [[Monogynie|monogyn]]
}}
2065fea8057130989e9f43ccee5d644c4edd1f6c
Category:Temnothorax
14
64
112
2012-05-29T22:31:30Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ameisen]]“
wikitext
text/x-wiki
[[Kategorie:Ameisen]]
f955b9a6bcf9dfb967469048129fb2b16000839e
Hauptseite
0
1
113
67
2012-05-29T22:31:48Z
Lollypop
2
wikitext
text/x-wiki
=[[:Kategorie:Projekte|Meine Projekte]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
= Starthilfen zum Wiki =
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
8ce5e96011f95929c37237f701aae25aed47db96
Template:Taxobox
10
44
114
96
2012-05-29T22:44:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>{| cellpadding="2" cellspacing="1" width="300" class="taxobox {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|palaeobox}} float-right toptextcells" id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}{{{Taxon_WissName}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} style="text-align:center;font-size:8pt;" {{!}} [[Datei:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxobox/Zeile
| Rang = {{{Taxon6_Rang|}}}
| Name = {{{Taxon6_Name|}}}
| LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| WissName = {{{Taxon6_WissName|}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon5_Rang|}}}
| Name = {{{Taxon5_Name|}}}
| LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon4_Rang|}}}
| Name = {{{Taxon4_Name|}}}
| LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon3_Rang|}}}
| Name = {{{Taxon3_Name|}}}
| LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon2_Rang|}}}
| Name = {{{Taxon2_Name|}}}
| LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon_Rang|}}}
| Name = {{{Taxon_Name|}}}
| WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
| KeinRang = {{{Rangunterdrückung|}}}
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon5_Rang|}}}
| WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon4_Rang|}}}
| WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon3_Rang|}}}
| WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon2_Rang|}}}
| WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon_Rang|}}}
| WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinRang = {{#if: {{{Taxon2_Autor|}}}{{{Taxon3_Autor|}}}{{{Taxon4_Autor|}}}{{{Taxon5_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxobox/Rang|Rang={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}{{#if: {{{Taxon_Name|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}||nonitalic}}
|
| {{#ifexpr: {{str find|{{PAGENAME}}|(}} = -1
| {{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:}}''{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}|{{PAGENAME}}}}''}}
| <span style="display:none">[[Vorlage:Taxobox/Wartung/KlammerlemmaUndKursiv]]</span>
}}}}
[[Kategorie:{{{Taxon2_WissName}}}|{{{Art}}}]]
</includeonly>
<noinclude>{{Dokumentation}}
</noinclude>
3ea9b4c34a0d8f63c97497e7a42e4c2a5e47478f
115
114
2012-05-29T22:46:32Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>{| cellpadding="2" cellspacing="1" width="300" class="taxobox {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|palaeobox}} float-right toptextcells" id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}{{{Taxon_WissName}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} style="text-align:center;font-size:8pt;" {{!}} [[Datei:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxobox/Zeile
| Rang = {{{Taxon6_Rang|}}}
| Name = {{{Taxon6_Name|}}}
| LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| WissName = {{{Taxon6_WissName|}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon5_Rang|}}}
| Name = {{{Taxon5_Name|}}}
| LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon4_Rang|}}}
| Name = {{{Taxon4_Name|}}}
| LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon3_Rang|}}}
| Name = {{{Taxon3_Name|}}}
| LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon2_Rang|}}}
| Name = {{{Taxon2_Name|}}}
| LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon_Rang|}}}
| Name = {{{Taxon_Name|}}}
| WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
| KeinRang = {{{Rangunterdrückung|}}}
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon5_Rang|}}}
| WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon4_Rang|}}}
| WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon3_Rang|}}}
| WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon2_Rang|}}}
| WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon_Rang|}}}
| WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinRang = {{#if: {{{Taxon2_Autor|}}}{{{Taxon3_Autor|}}}{{{Taxon4_Autor|}}}{{{Taxon5_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxobox/Rang|Rang={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}{{#if: {{{Taxon_Name|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}||nonitalic}}
|
| {{#ifexpr: {{str find|{{PAGENAME}}|(}} = -1
| {{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:}}''{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}|{{PAGENAME}}}}''}}
| <span style="display:none">[[Vorlage:Taxobox/Wartung/KlammerlemmaUndKursiv]]</span>
}}}}
[[Kategorie:{{{Taxon2_WissName}}}|{{{Taxon3_WissName}}}|{{{Taxon4_WissName}}}|{{{Taxon5_WissName}}}|{{{Taxon6_WissName}}}]]
</includeonly>
<noinclude>{{Dokumentation}}
</noinclude>
264ec4761c9327a1e9698cf6f82df3b5d7933a6a
119
115
2012-05-30T10:37:46Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>{| cellpadding="2" cellspacing="1" width="300" class="taxobox {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|palaeobox}} float-right toptextcells" id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}{{{Taxon_WissName}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} style="text-align:center;font-size:8pt;" {{!}} [[Datei:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxobox/Zeile
| Rang = {{{Taxon6_Rang|}}}
| Name = {{{Taxon6_Name|}}}
| LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| WissName = {{{Taxon6_WissName|}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon5_Rang|}}}
| Name = {{{Taxon5_Name|}}}
| LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon4_Rang|}}}
| Name = {{{Taxon4_Name|}}}
| LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon3_Rang|}}}
| Name = {{{Taxon3_Name|}}}
| LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon2_Rang|}}}
| Name = {{{Taxon2_Name|}}}
| LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon_Rang|}}}
| Name = {{{Taxon_Name|}}}
| WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
| KeinRang = {{{Rangunterdrückung|}}}
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon5_Rang|}}}
| WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon4_Rang|}}}
| WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon3_Rang|}}}
| WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon2_Rang|}}}
| WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon_Rang|}}}
| WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinRang = {{#if: {{{Taxon2_Autor|}}}{{{Taxon3_Autor|}}}{{{Taxon4_Autor|}}}{{{Taxon5_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxobox/Rang|Rang={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}{{#if: {{{Taxon_Name|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}||nonitalic}}
|
| {{#ifexpr: {{str find|{{PAGENAME}}|(}} = -1
| {{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:}}''{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}|{{PAGENAME}}}}''}}
| <span style="display:none">[[Vorlage:Taxobox/Wartung/KlammerlemmaUndKursiv]]</span>
}}}}</includeonly><noinclude>{{Dokumentation}}
</noinclude>
196f8c9ce3c249a3f28812628415466230ae3e22
Arundo donax
0
43
116
79
2012-05-29T22:48:50Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Pfahlrohr
| Taxon_WissName = Arundo donax
| Taxon_Rang = Art
| Taxon_Autor = [[Carl von Linné|L.]]
| Taxon2_WissName = Arundo
| Taxon2_Rang = Gattung
| Taxon3_WissName = Arundineae
| Taxon3_Rang = Tribus
| Taxon4_WissName = Arundinoideae
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Süßgräser
| Taxon5_WissName = Poaceae
| Taxon5_Rang = Familie
| Taxon6_Name = Süßgrasartige
| Taxon6_WissName = Poales
| Taxon6_Rang = Ordnung
}}
778a49b2b60bf6372abc8f42b7e7986326d382dc
117
116
2012-05-30T10:32:00Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Pfahlrohr
| Taxon_WissName = Arundo donax
| Taxon_Rang = Art
| Taxon_Autor = [[Carl von Linné|L.]]
| Taxon2_WissName = Arundo
| Taxon2_Rang = Gattung
| Taxon3_WissName = Arundineae
| Taxon3_Rang = Tribus
| Taxon4_WissName = Arundinoideae
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Süßgräser
| Taxon5_WissName = Poaceae
| Taxon5_Rang = Familie
| Taxon6_Name = Süßgrasartige
| Taxon6_WissName = Poales
| Taxon6_Rang = Ordnung
| Bild = Arundo donax Austrieb.jpg
| Bildbeschreibung = Pfahlrohr (''Arundo donax'') – new shoots in mid-May in Hamburg
}}
6104f987cb62884bf8f35c5fdac6e02686b89420
126
117
2012-05-30T11:03:01Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Pfahlrohr
| Taxon_WissName = Arundo donax
| Taxon_Rang = Art
| Taxon_Autor = [[Carl von Linné|L.]]
| Taxon2_WissName = Arundo
| Taxon2_Rang = Gattung
| Taxon3_WissName = Arundineae
| Taxon3_Rang = Tribus
| Taxon4_WissName = Arundinoideae
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Süßgräser
| Taxon5_WissName = Poaceae
| Taxon5_Rang = Familie
| Taxon6_Name = Süßgrasartige
| Taxon6_WissName = Poales
| Taxon6_Rang = Ordnung
| Bild = Arundo donax Austrieb.jpg
| Bildbeschreibung = Pfahlrohr (''Arundo donax'') – new shoots in mid-May in Hamburg
}}
== Description ==
The Pfahlrohr (giant cane) is really more of a southern plant, but it is moderately winter-hardy in our climate as well.
8e4aaba3a483f4a1d116ee2b5c4d83cfd830b71b
131
126
2012-06-05T13:53:20Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Pfahlrohr
| Taxon_WissName = Arundo donax
| Taxon_Rang = Art
| Taxon_Autor = [[Carl von Linné|L.]]
| Taxon2_WissName = Arundo
| Taxon2_Rang = Gattung
| Taxon3_WissName = Arundineae
| Taxon3_Rang = Tribus
| Taxon4_WissName = Arundinoideae
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Süßgräser
| Taxon5_WissName = Poaceae
| Taxon5_Rang = Familie
| Taxon6_Name = Süßgrasartige
| Taxon6_WissName = Poales
| Taxon6_Rang = Ordnung
| Bild = Arundo donax Austrieb.jpg
| Bildbeschreibung = Pfahlrohr (''Arundo donax'') – new shoots in mid-May in Hamburg
}}
== Description ==
The Pfahlrohr (giant cane) is really more of a southern plant, but it is moderately winter-hardy in our climate as well.
[[Kategorie:Arundo]]
9a313c28812cc9bbb4bde3900334ae45d75db9d8
File:Arundo donax Austrieb.jpg
6
65
118
2012-05-30T10:32:35Z
Lollypop
2
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Template:Bausteindesign4
10
66
120
2012-05-30T10:55:24Z
Lollypop
2
Die Seite wurde neu angelegt: „<code><onlyinclude>cellspacing="4" cellpadding="4" class="hintergrundfarbe1 rahmenfarbe3" style="width: 100%; font-size: 100%; border-style: solid; border-width: …“
wikitext
text/x-wiki
<code><onlyinclude>cellspacing="4" cellpadding="4" class="hintergrundfarbe1 rahmenfarbe3" style="width: 100%; font-size: 100%; border-style: solid; border-width: 2px; clear: both; position:relative;"</onlyinclude></code>
{{Dokumentation}}
8b2e0666c8ce21a109cc37ce9b8f24996052e4cd
Template:Bausteindesign5
10
67
121
2012-05-30T10:56:10Z
Lollypop
2
Die Seite wurde neu angelegt: „<code><onlyinclude>cellspacing="8" cellpadding="0" class="hintergrundfarbe1 rahmenfarbe2" style="width: 100%; font-size: 100%; border-style: solid; border-width: …“
wikitext
text/x-wiki
<code><onlyinclude>cellspacing="8" cellpadding="0" class="hintergrundfarbe1 rahmenfarbe2" style="width: 100%; font-size: 100%; border-style: solid; border-width: 3px; margin: auto; margin-top: 3px; margin-bottom: 3px; clear: both; position:relative;"</onlyinclude></code>
{{Dokumentation}}
8f8f8420cc297a0f3e08747f626f0fd15838a5e8
Template:Nobots
10
68
122
2012-05-30T10:58:05Z
Lollypop
2
Weiterleitung nach [[Vorlage:Bots]] erstellt
wikitext
text/x-wiki
#redirect [[Vorlage:bots]]
<!-- Vorlage ist Teil GLOBAL IDENTISCHER Vorlagen, Anmerkungen auf meta Beachten! -->
[[Kategorie:Vorlage:für Bots|Nobots]]
d0c86b1f92f2a0db944f4aaadfbc72fd10d4927b
Template:Tausendfach verwendet
10
69
123
2012-05-30T10:58:59Z
Lollypop
2
Die Seite wurde neu angelegt: „{| {{Bausteindesign4}} | valign="center" | [[Bild:Stop hand.svg|40px|alt=]] | Diese Vorlage ist <span class="plainlinks">[{{fullurl:Spezial:Linkliste|target={{SUB…“
wikitext
text/x-wiki
{| {{Bausteindesign4}}
| valign="middle" | [[Bild:Stop hand.svg|40px|alt=]]
| This template is <span class="plainlinks">[{{fullurl:Spezial:Linkliste|target={{SUBJECTPAGENAMEE}}&limit=500&hideredirs=1&hidelinks=1}} ''used in a great many places'']. If you know exactly what the effects are, you can [{{fullurl:{{FULLPAGENAME}}|action=edit}} edit it]</span>. Usually, though, it is better to discuss proposed changes on [[{{DISKUSSIONSSEITE}}]] first.
|}<noinclude>
Please '''always''' embed this template in other templates as <tt><noinclude>{{Tausendfach verwendet}}</noinclude></tt>!
[[Kategorie:Vorlage:Hinweisbaustein|{{PAGENAME}}]]
[[Kategorie:Vorlage:für Vorlagen|{{PAGENAME}}]]
[[eo:Ŝablono:Milfoje]]
</noinclude>
0904acd16228dc62bc13a4225b92a438d331e704
Template:Bots
10
70
124
2012-05-30T10:59:44Z
Lollypop
2
Die Seite wurde neu angelegt: „<noinclude><!-- Vorlage ist Teil GLOBAL IDENTISCHER Vorlagen, Anmerkungen auf meta Beachten! --> {{Dokumentation}}</noinclude>“
wikitext
text/x-wiki
<noinclude><!-- Vorlage ist Teil GLOBAL IDENTISCHER Vorlagen, Anmerkungen auf meta Beachten! -->
{{Dokumentation}}</noinclude>
679e6620cb010b2f217691a33c37546a0dce9ad2
Template:Bausteindesign3
10
71
125
2012-05-30T11:02:11Z
Lollypop
2
Die Seite wurde neu angelegt: „<code><onlyinclude>cellspacing="8" cellpadding="0" class="hintergrundfarbe1 rahmenfarbe1 {{{class|}}}" style="font-size: 100%; border-style: solid; margin-top: 2p…“
wikitext
text/x-wiki
<code><onlyinclude>cellspacing="8" cellpadding="0" class="hintergrundfarbe1 rahmenfarbe1 {{{class|}}}" style="font-size: 100%; border-style: solid; margin-top: 2px; margin-bottom: 2px; position:relative; {{{1|}}}"</onlyinclude></code>
{{Dokumentation}}
12ebe4e098e74fbf10e2890e22a4568fb958c035
Template:Taxobox/Doku
10
72
127
2012-05-30T11:04:22Z
Lollypop
2
Die Seite wurde neu angelegt: „<noinclude>{{Dokumentation/Dokuseite}}</noinclude> {{Tausendfach verwendet}} {{Wikipedia:Taxoboxen}} == Testseiten und Beispiele == * [[Wikipedia:Taxoboxen/Test…“
wikitext
text/x-wiki
<noinclude>{{Dokumentation/Dokuseite}}</noinclude>
{{Tausendfach verwendet}}
{{Wikipedia:Taxoboxen}}
== Testseiten und Beispiele ==
* [[Wikipedia:Taxoboxen/Test Vorlage Taxobox 1]]
* [[Wikipedia:Taxoboxen/Test Vorlage Taxobox 2]]
* [[Wikipedia:Taxoboxen/Test Vorlage Taxobox 3]]
== Untervorlagen ==
* [[Vorlage:Taxobox/Zeile]]
* [[Vorlage:Taxobox/Zitat]]
* [[Vorlage:Taxobox/Rang]]
* [[Vorlage:Taxobox/IstRangKursiv]]
== Technische Dokumentation ==
Siehe [[Vorlage:Taxobox/Doku/Tech]].
20fcdb40af291c0948fef1e77d0cf577597083cc
Template:!
10
54
128
92
2012-05-30T11:07:24Z
Lollypop
2
wikitext
text/x-wiki
|<noinclude>{{Dokumentation}}
</noinclude>
a492ad2fde3534428cc0125b707612181c5da0e5
Tetramorium caespitum
0
15
129
28
2012-05-30T11:11:28Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium sp
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = (unbekannt)
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
}}
[[Kategorie:Tetramorium]]
328e77fdb05c65db03d15572aaa6b9890d4520fd
130
129
2012-05-30T11:12:03Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium caespitum
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = caespitum
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
}}
[[Kategorie:Tetramorium]]
b470bd7edae8ab57474e6879894f9adafeb3cdfd
Sun Cluster - Repair Infrastructure
0
32
132
57
2012-06-11T07:41:55Z
Lollypop
2
wikitext
text/x-wiki
If a cluster node's infrastructure file is damaged, or a quorum device that no longer exists has to be removed from the configuration, perform the following steps:
1. Bring the node into non-cluster mode
<pre>
# reboot -- -sx
</pre>
From the OBP on SPARC systems:
<pre>
ok> boot -sx
</pre>
Or on x86/Opteron:
<pre>
b -sx
</pre>
2. Edit the infrastructure file:
<pre>
# mount /var
# export TERM=vt100
# vi /etc/cluster/ccr/infrastructure
</pre>
Remove all quorum-device entries here, and set the quorum votes of the other nodes (if there are more than two nodes) to 0.
For example:
cluster.nodes.2.properties.quorum_vote 0
3. Regenerate the checksum in the file:
<pre>
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
</pre>
or, as of Solaris Cluster 3.2:
<pre>
#/usr/cluster/lib/sc/ccradm recover -o /etc/cluster/ccr/global/infrastructure
</pre>
4. Check that everything is OK:
<pre>
# /usr/cluster/lib/sc/chkinfr
</pre>
5. Reboot into cluster mode:
<pre>
# reboot
</pre>
Alternative description by [http://www.edv-birk.de/ Lothar Birk]:
==Emergency situation: the cluster node gets no cluster quorum at boot==
===Boot into 'non-cluster' mode===
boot -xs
===Editing the infrastructure file in the CCR===
<pre>
cd /etc/cluster/ccr
or
cd /etc/cluster/ccr/global
cp infrastructure 100610_infrastructure
vi infrastructure
- set the other node's quorum vote to 0
...node.X...quorum_vote 0
- delete all lines at the end of the file containing:
...quorum_devices...
/usr/cluster/lib/sc/ccradm -i infrastructure -o
or
/usr/cluster/lib/sc/ccradm recover -o infrastructure
</pre>
===Boot back into cluster mode and create a quorum device===
<pre>
init 6
clq add d1
</pre>
[[Kategorie:SunCluster]]
1386af1b496fadb1ea315e0aae60a832c14a2ddf
133
132
2012-06-11T07:43:46Z
Lollypop
2
wikitext
text/x-wiki
If a cluster node's infrastructure file is damaged, or a quorum device that no longer exists has to be removed from the configuration, perform the following steps:
1. Bring the node into non-cluster mode
<pre>
# reboot -- -sx
</pre>
From the OBP on SPARC systems:
<pre>
ok> boot -sx
</pre>
Or on x86/Opteron:
<pre>
b -sx
</pre>
2. Edit the infrastructure file:
<pre>
# mount /var
# export TERM=vt100
# vi /etc/cluster/ccr/infrastructure
</pre>
Remove all quorum-device entries here, and set the quorum votes of the other nodes (if there are more than two nodes) to 0.
For example:
cluster.nodes.2.properties.quorum_vote 0
And enable install mode:
cluster.properties.installmode enabled
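The vi edits from step 2 can also be sketched non-interactively. This is an illustration only, not part of the original recipe: it assumes the key names shown above, the backup filename is arbitrary, and you should verify the patterns against your cluster release before use.

```shell
# Zero the quorum votes, drop the quorum_devices lines and enable
# install mode. Work on a copy so the original file survives a typo.
INFRA=/etc/cluster/ccr/infrastructure
cp "$INFRA" "$INFRA.bak"
sed -e 's/\(quorum_vote[[:space:]]\{1,\}\)[0-9]\{1,\}/\10/' \
    -e '/quorum_devices/d' \
    -e 's/\(installmode[[:space:]]\{1,\}\)disabled/\1enabled/' \
    "$INFRA.bak" > "$INFRA"
```

Afterwards the checksum still has to be regenerated with ccradm as described in step 3.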
3. Regenerate the checksum in the file:
<pre>
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
</pre>
or, as of Solaris Cluster 3.2:
<pre>
#/usr/cluster/lib/sc/ccradm recover -o /etc/cluster/ccr/global/infrastructure
</pre>
4. Check that everything is OK:
<pre>
# /usr/cluster/lib/sc/chkinfr
</pre>
5. Reboot into cluster mode:
<pre>
# reboot
</pre>
Alternative description by [http://www.edv-birk.de/ Lothar Birk]:
==Emergency situation: the cluster node gets no cluster quorum at boot==
===Boot into 'non-cluster' mode===
boot -xs
===Editing the infrastructure file in the CCR===
<pre>
cd /etc/cluster/ccr
or
cd /etc/cluster/ccr/global
cp infrastructure 100610_infrastructure
vi infrastructure
- set the other node's quorum vote to 0
...node.X...quorum_vote 0
- delete all lines at the end of the file containing:
...quorum_devices...
/usr/cluster/lib/sc/ccradm -i infrastructure -o
or
/usr/cluster/lib/sc/ccradm recover -o infrastructure
</pre>
===Boot back into cluster mode and create a quorum device===
<pre>
init 6
clq add d1
</pre>
[[Kategorie:SunCluster]]
c29e834eac17d6ed2ba7ca35835eb2a064d60b11
Bash cheatsheet
0
37
134
71
2012-06-11T10:02:48Z
Lollypop
2
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work just as well, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
[[Kategorie:Bash]]
c0c2007b23d36f28458fd93c7ff66978d6988a1e
135
134
2012-06-11T10:04:11Z
Lollypop
2
/* Zahlenfolgen */
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work just as well, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with several loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
[[Kategorie:Bash]]
d814e99a664a3f2b31b6607628ced2897487150c
136
135
2012-06-11T10:06:24Z
Lollypop
2
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work just as well, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with several loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
[[Kategorie:Bash]]
b0baf01295c27a57f8a6b45c79f4d3d0410e34b6
137
136
2012-06-11T10:09:12Z
Lollypop
2
/* Rechnen */
wikitext
text/x-wiki
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
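The two expansions above can be combined in one helper; `splitpath` is a name made up for this sketch, not a standard command:

```shell
# ${p%/*} strips the shortest trailing /component (like dirname),
# ${p##*/} strips the longest leading path (like basename).
splitpath() {
    local p=$1
    echo "dir:  ${p%/*}"
    echo "file: ${p##*/}"
}
splitpath /usr/bin/blafasel
# dir:  /usr/bin
# file: blafasel
```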
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work just as well, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with several loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
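The `$[ ... ]` form above is deprecated bash syntax; the POSIX arithmetic expansion `$(( ... ))` does the same job (note that the `**` power operator remains a bash extension):

```shell
echo $(( 3 + 4 ))        # 7
echo $(( 2 ** 8 ))       # 256, same as $[ 2 ** 8 ]
echo $(( (9 - 1) / 2 ))  # 4 -- integer division
```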
[[Kategorie:Bash]]
c667af9d1483e5852d610c4c05afe01751124434
Solaris IPMP
0
73
138
2012-06-13T12:42:38Z
Lollypop
2
Die Seite wurde neu angelegt: „<pre> ipadm create-ip net0 ipadm create-ip net1 ipadm set-ifprop -p standby=on -m ip net1 ipadm create-ipmp -i net0 -i net1 ipmp0 ipadm create-addr -T static -a l…“
wikitext
text/x-wiki
<pre>
ipadm create-ip net0
ipadm create-ip net1
ipadm set-ifprop -p standby=on -m ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a local=1.2.3.4/24 ipmp0/v4
</pre>
4d16f4130b89b6f6f833597f6679f30c70d2b9b1
139
138
2012-06-13T12:43:00Z
Lollypop
2
hat „[[Solaris ipmp]]“ nach „[[Solaris IPMP]]“ verschoben
wikitext
text/x-wiki
<pre>
ipadm create-ip net0
ipadm create-ip net1
ipadm set-ifprop -p standby=on -m ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a local=1.2.3.4/24 ipmp0/v4
</pre>
4d16f4130b89b6f6f833597f6679f30c70d2b9b1
141
139
2012-06-13T12:43:31Z
Lollypop
2
wikitext
text/x-wiki
<pre>
ipadm create-ip net0
ipadm create-ip net1
ipadm set-ifprop -p standby=on -m ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a local=1.2.3.4/24 ipmp0/v4
</pre>
[[Kategorie:Solaris]]
f26ecf586f28c554260057361dccebcff7d12a34
Solaris ipmp
0
74
140
2012-06-13T12:43:00Z
Lollypop
2
hat „[[Solaris ipmp]]“ nach „[[Solaris IPMP]]“ verschoben
wikitext
text/x-wiki
#WEITERLEITUNG [[Solaris IPMP]]
86edc02f69231ec9e662596d11b124e30f1b11af
SSH Tipps und Tricks
0
75
142
2012-06-15T09:30:39Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]] ==SSH über ein oder mehrere Hops== Um die SSH-Verbindung von Host_A zu Host_B machen zu können muß man sich über zwei Rechner dorthin tu…“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
==SSH across one or more hops==
To make an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to carry the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B can only be reached from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'
</pre>
GW_2, however, can only be reached via GW_1, so we need an entry for it as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'
</pre>
Now you simply type ssh Host_B on Host_A and are tunnelled through both gateways, GW_1 and GW_2. Port forwardings, e.g. for NFS, are then as simple as:
<pre>
root@Host_A# share -F NFS -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
8220f8309561ad2f4620229a2107a5a1a1791ab6
143
142
2012-06-15T09:30:57Z
Lollypop
2
/* SSH über ein oder mehrere Hops */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
==SSH across one or more hops==
To make an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to carry the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B can only be reached from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'
</pre>
GW_2, however, can only be reached via GW_1, so we need an entry for it as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'
</pre>
Now you simply type ssh Host_B on Host_A and are tunnelled through both gateways, GW_1 and GW_2. Port forwardings, e.g. for NFS, are then as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
c6d4efdd080c0a4582e17ea6ddb5b89bae48500a
144
143
2012-06-15T09:32:19Z
Lollypop
2
/* SSH über ein oder mehrere Hops */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
==SSH across one or more hops==
To make an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to carry the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B can only be reached from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type ssh Host_B on Host_A and are tunneled through the two gateways GW_1 and GW_2. Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
8deae230c0353e95bcc8892c92f8de2ff4fdc48e
145
144
2012-06-15T09:54:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
==SSH across one or more hops==
To establish an SSH connection from Host_A to Host_B, you have to tunnel through two intermediate machines (GW_1 and GW_2). If you log in hop by hop, it is often awkward to drag the port forwardings, or the SOCKS5 proxy, along the whole chain. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B is only reachable from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then set up in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
==Breaking out of paradise==
Problem: the environment you are in is unfortunately so boxed in with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look something up, or fetch something, somewhere else. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add the following to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course, that host can in turn serve as a ProxyCommand hop for further connections, and so on.
==Oh right... the internal wiki...==
Not a problem if it is only reachable from the internal network either; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
</pre>
bc76655a73ca48c51f9fafbac8ecdaf8e549c6c1
146
145
2012-06-15T09:55:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, the way to the destination=
==SSH across one or more hops==
To establish an SSH connection from Host_A to Host_B, you have to tunnel through two intermediate machines (GW_1 and GW_2). If you log in hop by hop, it is often awkward to drag the port forwardings, or the SOCKS5 proxy, along the whole chain. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B is only reachable from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then set up in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
==Breaking out of paradise==
Problem: the environment you are in is unfortunately so boxed in with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look something up, or fetch something, somewhere else. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add the following to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course, that host can in turn serve as a ProxyCommand hop for further connections, and so on.
==Oh right... the internal wiki...==
Not a problem if it is only reachable from the internal network either; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
</pre>
56ee51ca2fe9aa4cd2d99a36a904836ccdd1c218
147
146
2012-06-15T10:35:49Z
Lollypop
2
/* Oh right... the internal wiki... */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, the way to the destination=
==SSH across one or more hops==
To establish an SSH connection from Host_A to Host_B, you have to tunnel through two intermediate machines (GW_1 and GW_2). If you log in hop by hop, it is often awkward to drag the port forwardings, or the SOCKS5 proxy, along the whole chain. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B is only reachable from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then set up in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
==Breaking out of paradise==
Problem: the environment you are in is unfortunately so boxed in with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look something up, or fetch something, somewhere else. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add the following to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course, that host can in turn serve as a ProxyCommand hop for further connections, and so on.
==Oh right... the internal wiki...==
Not a problem if it is only reachable from the internal network either; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
20748801eab36183d02c62044bb52971ed22a0e6
148
147
2012-06-15T10:37:27Z
Lollypop
2
/* Oh right... the internal wiki... */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, the way to the destination=
==SSH across one or more hops==
To establish an SSH connection from Host_A to Host_B, you have to tunnel through two intermediate machines (GW_1 and GW_2). If you log in hop by hop, it is often awkward to drag the port forwardings, or the SOCKS5 proxy, along the whole chain. It is easier to define ProxyCommands for the route from Host_A to Host_B.
Host_B is only reachable from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/22; cat <&3 & cat >&3;kill $!'"
</pre>
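The ProxyCommand above simply splices ssh's stdin/stdout onto a raw TCP connection: on the gateway, bash opens /dev/tcp/%h/22, one cat copies the socket to stdout, the other copies stdin to the socket, and kill $! cleans up the background cat at the end. As a rough illustration of that bidirectional-copy idea (a sketch only, not part of the original setup), the same relay in Python:

```python
import socket
import threading

def pump(src, dst):
    # One direction of the copy, like each `cat` in the ProxyCommand.
    while True:
        data = src.recv(4096)
        if not data:  # peer closed its sending side
            try:
                dst.shutdown(socket.SHUT_WR)  # propagate EOF, like kill $! ending the other cat
            except OSError:
                pass
            return
        dst.sendall(data)

def relay(a, b):
    # Bidirectional copy between two sockets, like `cat <&3 & cat >&3`.
    t = threading.Thread(target=pump, args=(a, b))
    t.start()
    pump(b, a)
    t.join()
```

Here each pump() plays the role of one cat process; in the real ProxyCommand the two endpoints are ssh's stdio and the TCP socket to port 22.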
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then set up in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
==Breaking out of paradise==
Problem: the environment you are in is unfortunately so boxed in with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look something up, or fetch something, somewhere else. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add the following to ~/.ssh/config:
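For what it is worth, newer OpenSSH clients can express the same hop chain without the /dev/tcp trick: since 5.4 there is ssh -W, and since 7.3 ProxyJump (-J). A sketch with the same host names, assuming such a client version:
<pre>
# Single entry covering both gateways (OpenSSH >= 7.3)
Host Host_B
    ProxyJump GW_1,GW_2

# Alternative per-hop form (OpenSSH >= 5.4):
#   ProxyCommand ssh -W %h:%p GW_2
</pre>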
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course, that host can in turn serve as a ProxyCommand hop for further connections, and so on.
==Oh right... the internal wiki...==
Not a problem if it is only reachable from the internal network either; we simply go through a SOCKS proxy:
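If connect(1) is not available, OpenBSD netcat can talk to an HTTP proxy as well; a sketch, assuming an nc build with -X support:
<pre>
Host ssh-via-proxy
    ProxyCommand nc -X connect -x proxy-server:3128 ssh-server 443
</pre>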
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Port for the local SOCKS5 proxy (dynamic application-level forwarding)
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
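OpenSSH 8.7 and later do have such equivalents: SessionType none corresponds to -N and ForkAfterAuthentication yes to -f. Added to the existing Host wiki entry (assuming a sufficiently new client), a plain <i>ssh wiki</i> then suffices:
<pre>
Host wiki
    SessionType none
    ForkAfterAuthentication yes
</pre>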
8dad0ae256a3f9381b1528cb57cb649740f4a5ce
Category:NetApp
14
76
149
2012-06-15T17:36:47Z
Lollypop
2
The page was newly created: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
NetApp SMO
0
77
150
2012-06-15T17:42:34Z
Lollypop
2
The page was newly created: „[[Kategorie:NetApp]] =Installation des SnapManager for Oracle unter Solaris= ==HostUtilities== <pre> # pkgadd -d NTAPSANTool.pkg # /opt/NTAP/SANToolkit/bin/mpxi…“
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# pkgadd -d NTAPSANTool.pkg
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
# cat >/opt/NTAPsnapdrive/snapdrive.conf <<EOF
#
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
EOF
</pre>
Configure SnapDrive:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Then on through the wizard...
dd87d19095a1dbf72524c58efd02cb553d8b0ee8
151
150
2012-06-15T17:44:52Z
Lollypop
2
/* SnapDrive */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# pkgadd -d NTAPSANTool.pkg
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
# cat >/opt/NTAPsnapdrive/snapdrive.conf <<EOF
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
EOF
</pre>
Configure SnapDrive:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
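The `getent hosts fas01 >> /etc/hosts` step above appends unconditionally, so running it twice leaves a duplicate entry. A minimal idempotent sketch (the filer name `fas01` comes from the example above; the address and the temp file are stand-ins for the demo, on a real host use /etc/hosts and `entry="$(getent hosts fas01)"`):

```shell
# Append the filer entry only if it is not already present.
# Demo writes to a temp file instead of /etc/hosts.
hosts=$(mktemp)
echo "192.0.2.1 otherhost" > "$hosts"
entry="192.0.2.9 fas01"
grep -qw fas01 "$hosts" || echo "$entry" >> "$hosts"
grep -qw fas01 "$hosts" || echo "$entry" >> "$hosts"  # re-run is a no-op
cat "$hosts"
```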
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Then just step through the wizard...
38421e0f163f677c53b25548583e07a3f112ab08
152
151
2012-06-15T17:47:57Z
Lollypop
2
/* SnapDrive */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# pkgadd -d NTAPSANTool.pkg
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you see frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleanup of unwanted dirs
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consistency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successful restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
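With almost every line commented out, it is hard to spot what actually deviates from the shipped defaults. A small sketch to list only the active settings (shown on an inline sample file; on a real host point `conf` at /opt/NTAPsnapdrive/snapdrive.conf):

```shell
# List only the uncommented, non-blank lines, i.e. the settings that
# override the defaults. The sample stands in for snapdrive.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Snapdrive Configuration
#default-transport="iscsi" #commented-out default
default-transport="fcp"

fstype="ufs"
EOF
active=$(grep -v -e '^#' -e '^$' "$conf")
echo "$active"
```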
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Establish the connection from SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Then just step through the wizard...
30488175d6c08225b1f307e6852b21dd36024ada
NetApp SMO
0
77
153
152
2012-06-15T17:49:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# pkgadd -d NTAPSANTool.pkg
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you see frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleanup of unwanted dirs
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consistency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successful restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
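The file header above recommends copying a commented default line and editing the copy, so a leftover duplicate uncommented key is an easy mistake to make. A quick sketch to detect that (sample file stands in for the real snapdrive.conf path):

```shell
# Report any setting key that appears uncommented more than once.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#vmtype="vxvm" #commented-out default
vmtype="svm"
vmtype="svm"
fstype="ufs"
EOF
dups=$(grep -v -e '^#' -e '^$' "$conf" | cut -d= -f1 | sort | uniq -d)
echo "${dups:-no duplicate keys}"
```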
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Establish the connection from SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Then just step through the wizard...
12fb1b46ab31d17a30549e3a498b48ba277b39da
154
153
2012-06-15T17:56:42Z
Lollypop
2
/* HostUtilities */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# cd /tmp
# gtar xzf ~/smo/netapp_solaris_host_utilities_5_1_sparc.tar.gz
# pkgadd -d NTAPSANTool.pkg
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you see frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleanup of unwanted dirs
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consistency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successful restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Establish the connection from SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Then just step through the wizard...
e211c8c3f9a557d1a243c7419f1dbae71640d297
155
154
2012-06-15T18:22:29Z
Lollypop
2
/* SnapDrive */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# cd /tmp
# gtar xzf ~/smo/netapp_solaris_host_utilities_5_1_sparc.tar.gz
# pkgadd -d NTAPSANTool.pkg
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# cd /tmp
# gtar xzf ~/smo/NTAPsnapdrive_sun_sparc_5.0P1.tar.Z
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
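Because snapdrive.conf documents every default as a commented-out line, the effective overrides are exactly the uncommented lines. A quick way to list them (run here against a small sample file rather than the real /opt/NTAPsnapdrive/snapdrive.conf):

```shell
# List only the active (uncommented, non-empty) settings of a
# snapdrive.conf-style file; the sample mirrors the config above.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#PATH="/sbin:/usr/sbin" #toolset search path
all-access-if-rbac-unspecified="on" #Allow all access without RBAC file
#audit-log-save=2 #Number of historical audit log files to save
default-transport="fcp" #Transport type for provisioning
fstype="ufs" #File system to use
EOF

# Drop comment and blank lines, keep only the setting names.
active=$(grep -Ev '^[[:space:]]*(#|$)' "$conf" | cut -d= -f1)
echo "$active"
rm -f "$conf"
```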
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Connect SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Now just step through the wizard...
e87391dd7c68ceefb0270acb0b3fcbe04adf7439
175
155
2012-06-16T20:56:22Z
Lollypop
2
/* HostUtilities */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# cd /tmp
# gtar xzf ~/smo/netapp_solaris_host_utilities_5_1_sparc.tar.gz
# pkgadd -d NTAPSANTool.pkg
</pre>
And for heaven's sake, do NOT run:
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
Otherwise ALUA will not work!
<pre>
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# cd /tmp
# gtar xzf ~/smo/NTAPsnapdrive_sun_sparc_5.0P1.tar.Z
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Connect SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Now just step through the wizard...
e9a7ea682979c087b7e107d2d4a997ae80811f48
176
175
2012-06-16T20:57:01Z
Lollypop
2
/* HostUtilities */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# cd /tmp
# gtar xzf ~/smo/netapp_solaris_host_utilities_5_1_sparc.tar.gz
# pkgadd -d NTAPSANTool.pkg
</pre>
And for heaven's sake, do NOT run:
# /opt/NTAP/SANToolkit/bin/mpxio_set -e
Otherwise ALUA will not work!
<pre>
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
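Since the warning above says not to run mpxio_set, a non-invasive way to see whether MPxIO is already configured is to read the `mpxio-disable` setting, which on Solaris lives in /kernel/drv/fp.conf for FC. The snippet parses a sample copy of such a file (the real path only exists on a Solaris host):

```shell
# Parse the mpxio-disable setting from an fp.conf-style file.
# A sample file is used here; on a real host read /kernel/drv/fp.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# fp.conf sample
mpxio-disable="no";
EOF

# The field between the quotes is the value ("no" means MPxIO is on).
state=$(awk -F'"' '/^mpxio-disable/ { print $2 }' "$conf")
echo "$state"
rm -f "$conf"
```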
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# cd /tmp
# gtar xzf ~/smo/NTAPsnapdrive_sun_sparc_5.0P1.tar.Z
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Connect SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Now just step through the wizard...
ed232ddce7923982aedf7981412a6c513e3b1621
Hauptseite
0
1
156
113
2012-06-15T22:53:37Z
Lollypop
2
wikitext
text/x-wiki
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
Comments are always welcome at <email>L@rs.Timmann.de</email>
<mail address='L@rs.Timmann.de' description='L@rs.Timmann.de'>me</mail>
= Getting Started with the Wiki =
You can find help on using and configuring the wiki software in the [http://meta.wikimedia.org/wiki/Help:Contents user manual].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration variables]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
bec3a9490b941faff4dac834e300d2dcc0a9f63a
157
156
2012-06-15T22:58:43Z
Lollypop
2
wikitext
text/x-wiki
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
Comments are always welcome; write to me: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting Started with the Wiki =
You can find help on using and configuring the wiki software in the [http://meta.wikimedia.org/wiki/Help:Contents user manual].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration variables]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
d17e2c979d2f7ec3bdbfe4aa897ee28337063234
Category:Arundo
14
42
158
77
2012-06-15T23:51:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Poaceae]]
49222b1513372850327a477425a8282153473506
Category:Poaceae
14
78
159
2012-06-15T23:51:39Z
Lollypop
2
The page was newly created: „[[Kategorie:Pflanzen]]“
wikitext
text/x-wiki
[[Kategorie:Pflanzen]]
251e5bc4da59e803a6a47f224643e4460e86c273
Category:Araceae
14
79
160
2012-06-15T23:53:54Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Pflanzen]]“
wikitext
text/x-wiki
[[Kategorie:Pflanzen]]
251e5bc4da59e803a6a47f224643e4460e86c273
170
160
2012-06-16T10:09:22Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Pflanzen]]
Ein wichtiger Link zu diesem Thema ist sicherlich [[www.aroid.org]]
987676f3586307bceea7ae7e0bf0bb4f7552bbf5
171
170
2012-06-16T10:10:40Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Pflanzen]]
Ein wichtiger Link zu diesem Thema ist sicherlich die [http://www.aroid.org International Aroid Society]
bb3b61eed1783a9c400ed404606298539196c8d5
Amorphophallus fuscus
0
80
161
2012-06-16T00:00:34Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Taxobox | Taxon_Name = | Taxon_WissName = Amorphophallus fuscus | Taxon_Rang = Art | Taxon_Autor = Hett. (N. Thailand) | Taxon2_WissName = …“
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name =
| Taxon_WissName = Amorphophallus fuscus
| Taxon_Rang = Art
| Taxon_Autor = Hett. (N. Thailand)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName = Araceae
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Amorphophallus]]
cb1ff99b8d8b30e21afff563d2911342b9e634ec
163
161
2012-06-16T00:03:27Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Amorphophallus fuscus
| Taxon_WissName = Amorphophallus fuscus
| Taxon_Rang = Art
| Taxon_Autor = Hett. (N. Thailand)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName = Araceae
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Amorphophallus]]
64729e80ade0aea03ac870d744cba422d1c21016
167
163
2012-06-16T00:08:55Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Amorphophallus fuscus
| Taxon_WissName = Amorphophallus fuscus
| Taxon_Rang = Art
| Taxon_Autor = Hett. (N. Thailand)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Araceae
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Amorphophallus]]
1a7735cac867bc38c337154c0e267027e1146064
Category:Amorphophallus
14
81
162
2012-06-16T00:01:00Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Araceae]]“
wikitext
text/x-wiki
[[Kategorie:Araceae]]
4e71e23f23caf05c37434ce6d8a532c3b578e767
Amorphophallus henryi
0
82
164
2012-06-16T00:05:26Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Taxobox | Taxon_Name = Amorphophallus henryi | Taxon_WissName = Amorphophallus henryi | Taxon_Rang = Art | Taxon_Autor = N.E. Br. (Taiwan) | …“
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Amorphophallus henryi
| Taxon_WissName = Amorphophallus henryi
| Taxon_Rang = Art
| Taxon_Autor = N.E. Br. (Taiwan)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName = Araceae
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Amorphophallus]]
7b6d17f168b84262035f245ad250bcbb76fcec7c
166
164
2012-06-16T00:08:23Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Amorphophallus henryi
| Taxon_WissName = Amorphophallus henryi
| Taxon_Rang = Art
| Taxon_Autor = N.E. Br. (Taiwan)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName = Araceae
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Amorphophallus]]
8e8891a2bbd8def8c191dd837fa6f806eded5639
Dracunculus vulgaris
0
83
165
2012-06-16T00:07:38Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Taxobox | Taxon_Name = Dracunculus vulgaris | Taxon_WissName = Dracunculus vulgaris | Taxon_Rang = Art | Taxon_Autor = | Taxon2_WissName = …“
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Dracunculus vulgaris
| Taxon_WissName = Dracunculus vulgaris
| Taxon_Rang = Art
| Taxon_Autor =
| Taxon2_WissName = Dracunculus
| Taxon2_Rang = Gattung
| Taxon3_WissName = Araceae
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Dracunculus]]
b74fcc99a7b361d2f48b7f9412be6f94bb80dd5e
168
165
2012-06-16T00:09:16Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Dracunculus vulgaris
| Taxon_WissName = Dracunculus vulgaris
| Taxon_Rang = Art
| Taxon_Autor =
| Taxon2_WissName = Dracunculus
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName = Araceae
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Dracunculus]]
22d7c60021d362982ddfe33c5f2d39910b18a3c6
Category:Dracunculus
14
84
169
2012-06-16T00:09:45Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Araceae]]“
wikitext
text/x-wiki
[[Kategorie:Araceae]]
4e71e23f23caf05c37434ce6d8a532c3b578e767
Arum maculatum
0
85
172
2012-06-16T18:56:07Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Taxobox | Taxon_Name = Arum maculatum | Taxon_WissName = Arum maculatum | Taxon_Rang = Art | Taxon_Autor = | Taxon2_WissName = Arum | Taxon…“
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Arum maculatum
| Taxon_WissName = Arum maculatum
| Taxon_Rang = Art
| Taxon_Autor =
| Taxon2_WissName = Arum
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Araceae
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Arum]]
d7fe078922e3e6ec1335c01d791b4fdba316279e
174
172
2012-06-16T18:57:33Z
Lollypop
2
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Gefleckter Aronstab
| Taxon_WissName = Arum maculatum
| Taxon_Rang = Art
| Taxon_Autor =
| Taxon2_WissName = Arum
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Araceae
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Kategorie:Arum]]
8935b29cba33bcceaf435bb9f0a00876726c62ca
Category:Arum
14
86
173
2012-06-16T18:56:40Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Araceae]]“
wikitext
text/x-wiki
[[Kategorie:Araceae]]
4e71e23f23caf05c37434ce6d8a532c3b578e767
NetApp Partner path misconfigured
0
87
177
2012-06-16T21:16:27Z
Lollypop
2
Die Seite wurde neu angelegt: „=FCP PARTNER PATH MISCONFIGURED= ==Auf dem Filer nachsehen, auf welchen LUNs das Problem besteht== Statistiken au Null setzen: filer> lun stats -z Dann schauen: …“
wikitext
text/x-wiki
=FCP PARTNER PATH MISCONFIGURED=
==Auf dem Filer nachsehen, auf welchen LUNs das Problem besteht==
Statistiken auf Null setzen:
filer> lun stats -z
Dann schauen:
filer> lun stats -o
Schlecht ist es, wenn in der Spalte "Partner Ops" oder "Partner KBytes" etwas größer als 0 steht.
==Mögliche Ursachen==
# Kein ALUA konfiguriert
# MPxIO falsch konfiguriert: Niemals "/opt/NTAP/SANToolkit/bin/mpxio_set -e" ausführen. Zurücknehmen kann man das mit "/opt/NTAP/SANToolkit/bin/mpxio_set -d" oder händisch in /kernel/drv/scsi_vhci.conf. Danach "touch /reconfigure ; init 6"
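Ob ALUA auf der Initiator-Group aktiv ist, läßt sich auf dem Filer prüfen und gegebenenfalls einschalten (Skizze für Data ONTAP 7-Mode; der Name <igroup> ist hier nur ein Platzhalter):
<pre>
filer> igroup show -v <igroup>
filer> igroup set <igroup> alua yes
</pre>
Danach müssen die Hosts ihre Pfade neu scannen, damit die bevorzugten Pfade übernommen werden.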
12611830323ade220cc444042afa6565bb2b7f5c
SSH Tipps und Tricks
0
75
178
148
2012-06-19T08:52:54Z
Lollypop
2
/* SSH über ein oder mehrere Hops */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, der Weg zum Ziel=
==SSH über ein oder mehrere Hops==
Um eine SSH-Verbindung von Host_A zu Host_B aufbauen zu können, muß man sich über zwei Rechner dorthin tunneln (GW_1 und GW_2). Wenn man sich immer erst einloggt und dann weiter einloggt, ist es manchmal sehr schwierig, die Portforwardings oder den Socks5-Proxy mitzuschleifen. Einfacher ist es, wenn man sich ProxyCommands für den Weg von Host_A zu Host_B definiert.
Zu Host_B kommen wir nur von GW_2 aus, also machen wir uns hierfür einen Eintrag in der ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Zu GW_2 kommen wir aber nur über GW_1, also brauchen wir hierfür auch einen Eintrag:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Jetzt gibt man auf Host_A einfach <i>ssh Host_B</i> ein und wird über die beiden Gateways GW_1 und GW_2 getunnelt.
Portforwardings für z.B. NFS macht man jetzt einfach so:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
Im Hintergrund werden dann die Tunnelverbindungen aufgebaut, und man macht das Portforwarding direkt von Host_A nach Host_B. Sehr schlank und elegant.
PS: Das /dev/tcp/%h/%p ist eine eingebaute Netzwerk-Funktion der Bash; %h und %p werden von der SSH durch Host (%h) und Port (%p) ersetzt.
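Bei neueren OpenSSH-Versionen (ab 5.4) kann man sich den Bash-Trick auch sparen und das eingebaute ssh -W benutzen; der Eintrag in der ~/.ssh/config sähe dann z.B. so aus:
<pre>
Host Host_B
ProxyCommand ssh -W %h:%p GW_2
</pre>
Das Prinzip ist dasselbe: die SSH zu GW_2 reicht die TCP-Verbindung zu %h:%p einfach durch.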
==Ausbruch aus dem Paradies==
Problem: Die Umgebung, in der man sich aufhält, ist leider so unglücklich mit Firewalls verbaut, daß man nicht arbeiten kann. Man muß aber per SSH raus, um woanders kurz etwas zu schauen oder zu holen. Nun ja, es gibt immer einen Weg...
Voraussetzung ist ein lokal installiertes [http://www.meadowy.org/~gotoh/projects/connect connect], z.B. unter Ubuntu: apt-get install connect-proxy.
Weiterhin braucht man einen SSH-Server, auf dem ein sshd auf Port 443 lauscht, denn die meisten Proxies lassen einen nur auf wohlbekannte Ports durch.
Dann trägt man in der ~/.ssh/config ein:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Schwuppdiwupp ist man mit <i>ssh ssh-via-proxy</i> auf dem SSH-Ziel, wo man hinmöchte. Natürlich kann man den ssh-server selbst wieder als Sprungbrett in einem weiteren ProxyCommand eintragen usw. usw.
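Ob der Weg durch den Proxy überhaupt funktioniert, testet man am schnellsten mit einem kurzen Kommando (ssh-via-proxy ist der oben definierte Alias):
<pre>
user@Host_A$ ssh -o ConnectTimeout=10 ssh-via-proxy true && echo "Tunnel OK"
</pre>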
==Achja... das interne Wiki...==
Auch nicht schlimm, wenn das nur vom internen Netz erreichbar ist, dann fragen wir einfach via Socks-Proxy an:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
Die Optionen sind:
<pre>
-C Requests compression <- das ist optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Oder wieder via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
Und dann <i>ssh -N -f wiki</i> (Entsprechungen für -N und -f habe ich noch nicht gefunden).
37d4df173efb72d47587d047a2ccfad995fee8d7
Linux udev
0
88
179
2012-08-08T09:30:46Z
Lollypop
2
Die Seite wurde neu angelegt: „/etc/udev/rules.d/99-custom.rules ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="Virt…“
wikitext
text/x-wiki
/etc/udev/rules.d/99-custom.rules
ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="VirtualBox-$env{DM_NAME}"
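Nach dem Anlegen der Regel kann man sie neu einlesen lassen und prüfen, ob der Symlink erzeugt würde (Annahme: ein udev mit udevadm; der Device-Pfad dm-0 ist hier nur ein Beispiel):
<pre>
# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block
# udevadm test /sys/block/dm-0 2>&1 | grep -i symlink
</pre>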
[[Kategorie:Linux]]
ee62ff888bd7f232c48a981fbdc5042f477ce7c9
Category:Linux
14
89
180
2012-08-08T09:31:24Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
ZFS cheatsheet
0
29
181
55
2012-08-22T12:56:15Z
Lollypop
2
wikitext
text/x-wiki
* [[ZFS_Recovery|Reparieren von defektem ZFS]]
* Wichtige ZFS-Patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Eine gefühlte Langsamkeit auf Systemen mit ZFS kommt vom sehr großen Cachehunger. Den kann man eingrenzen:
Erstmal schauen, was Phase ist:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Oder nur ZFS
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Ausgeben aller ARC-Parameter:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
Man kann sich auch alle Parameter ausgeben lassen, die für ZFS gesetzt sind mit:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Setzen von Kernelparametern geht auch online mit:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
Das setzt den zfs_arc_max auf 4GB = 0x100000000
== Limitieren des ARC Cache ==
In der /etc/system einfach:
set zfs:zfs_arc_max = <Number of bytes>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== ZFS Platzverbrauch besser anzeigen ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
Wenn zfs list -o space als shortcut noch nicht zur Verfügung steht, geht meist:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
Erstmal den ZFS Rootpool anlegen:
# zpool create rpool /dev/dsk/<disk>
Wer Problemen aus dem Weg gehen möchte, läßt den Namen bei rpool.
Boot-Environment mit lucreate erstellen
# lucreate -c ufsBE -n zfsBE -p rpool
Hiermit werden die Files in die ZFS-Umgebung kopiert.
Prüfen, ob das BootFS richtig gesetzt wurde:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Auskommentieren von eventuell noch verbliebenen rootdev-Einträgen in der /etc/system:
<pre>
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
[[Kategorie:ZFS]]
6db0c92c9341931c5900e8ade4d74425bdd14259
182
181
2012-08-22T13:01:33Z
Lollypop
2
/* Migration UFS-Root -> ZFS-Root via Live-Upgrade */
wikitext
text/x-wiki
* [[ZFS_Recovery|Reparieren von defektem ZFS]]
* Wichtige ZFS-Patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Eine gefühlte Langsamkeit auf Systemen mit ZFS kommt vom sehr großen Cachehunger. Den kann man eingrenzen:
Erstmal schauen, was Phase ist:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Oder nur ZFS
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Ausgeben aller ARC-Parameter:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
Man kann sich auch alle Parameter ausgeben lassen, die für ZFS gesetzt sind mit:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Setzen von Kernelparametern geht auch online mit:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
Das setzt den zfs_arc_max auf 4GB = 0x100000000
== Limitieren des ARC Cache ==
In der /etc/system einfach:
set zfs:zfs_arc_max = <Number of bytes>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
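Ein Beispiel-Eintrag, der den ARC auf 1 GB begrenzt (der Wert ist natürlich an den eigenen Hauptspeicher anzupassen und wird erst nach einem Reboot wirksam; Kommentarzeilen beginnen in der /etc/system mit einem Stern):
<pre>
* ZFS ARC auf 1 GB (0x40000000 Bytes) begrenzen
set zfs:zfs_arc_max = 0x40000000
</pre>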
== ZFS Platzverbrauch besser anzeigen ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
Wenn zfs list -o space als shortcut noch nicht zur Verfügung steht, geht meist:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
Erstmal den ZFS Rootpool anlegen:
# zpool create rpool /dev/dsk/<disk>
Wer Problemen aus dem Weg gehen möchte, läßt den Namen bei rpool.
Boot-Environment (BE) mit lucreate erstellen
# lucreate -c ufsBE -n zfsBE -p rpool
Hiermit werden die Files in die ZFS-Umgebung kopiert.
Prüfen, ob das BootFS richtig gesetzt wurde:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Auskommentieren von eventuell noch verbliebenen rootdev-Einträgen in der /etc/system:
<pre>
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
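Was der perl-Einzeiler macht, kann man gefahrlos an einer Beispieldatei nachvollziehen (die Datei /tmp/system.demo und ihr Inhalt sind nur zur Illustration):

```shell
# Beispieldatei mit einem rootdev-Eintrag anlegen (nur zur Illustration)
cat > /tmp/system.demo <<'EOF'
set maxphys=1048576
rootdev:/pseudo/md@0:0,0,blk
EOF
# Derselbe Einzeiler wie oben: rootdev-Zeilen werden auskommentiert,
# das Original landet mit Endung .orig als Backup daneben
perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/system.demo
cat /tmp/system.demo
```

Die rootdev-Zeile trägt danach einen führenden Stern, so wie es die Kommentar-Syntax der /etc/system erwartet.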
Aktivieren des neuen BEs
# luactivate zfsBE
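Danach bootet man in das neue BE; laut Live-Upgrade-Dokumentation sollte man dazu init 6 (bzw. shutdown -i6) verwenden und nicht reboot, damit die Umschaltung der Boot-Environments sauber abläuft:
<pre>
# init 6
</pre>
Nach dem Reboot zeigt lustatus, ob das zfsBE aktiv ist.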
[[Kategorie:ZFS]]
3178234e092dd23e40caaecc0ed8e03edca99be2
Tapinoma sp.
0
60
183
104
2012-08-22T13:19:04Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisengattung
| DeName =
| WissName = Tapinoma caespitum
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = caespitum
| Koeniginnen = [[Polygynie|polygyn]]
}}
Tetramorium caespitum ist eine kleine, relativ langsame Art, die in Sandgebieten vorkommt.
In Hamburg habe ich sie im Hafenrandbereich (Kirchwerder) und in den Holmer Sandbergen finden können.
==Futter==
* Brot
* Mehlwürmer
* Zucker-/Honigwasser
==Ausbruchsschutz==
* Paraffinöl
==Platzbedarf==
* Gering, Farmbecken reicht
==Eigene Haltungserfahrungen==
Ich habe mir eine kleine Kolonie in Kirchwerder ausgegraben und möchte hier ein wenig von den Erfahrungen berichten.
Diese Art verhält sich gegenüber Futter anders als meine anderen Ameisen. Angebotene Mehlwürmer werden meist innerhalb eines halben Tages mit Sand bedeckt. Unter dem Sand werden die Mehlwürmer dann ausgehöhlt.
Auch Wasserschälchen werden gerne mit Sand zugebaggert, so daß ich sie manchmal als die Schlampen unter den Ameisen tituliere.
649224d7d9f9187801bfe0f4727a013e9b15b79a
185
183
2012-08-22T13:23:25Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisengattung
| DeName =
| WissName = Tapinoma sp.
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = sp.
| Koeniginnen = [[Polygynie|polygyn]]
}}
==Allgemeines==
Tapinoma sp. ist eine kleine quirlige Art, die es warm mag.
==Futter==
* Mehlwürmer (Hauptfutter)
* Zucker-/Honigwasser
==Ausbruchsschutz==
* PTFE
==Platzbedarf==
* Gering
==Eigene Haltungserfahrungen==
551f257567419bf2c5aa58f3b11af05c2d16fac2
202
185
2012-08-22T14:09:44Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName =
| WissName = Tapinoma sp.
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = sp.
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = PTFE
| Futter = Insekten, Zuckerwasser
}}
== Allgemeines ==
Tapinoma sp. ist eine kleine quirlige Art, die es warm mag.
== Eigene Haltungserfahrungen ==
...
397017822272dcf1d8362d02e71464cdec93e6c7
Tetramorium caespitum
0
15
184
130
2012-08-22T13:21:06Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium caespitum
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = caespitum
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
}}
Tetramorium caespitum ist eine kleine, relativ langsame Art, die in Sandgebieten vorkommt.
In Hamburg habe ich sie im Hafenrandbereich (Kirchwerder) und in den Holmer Sandbergen finden können.
==Futter==
* Brot
* Insekten
* Zucker-/Honigwasser
==Ausbruchsschutz==
* Paraffinöl
==Platzbedarf==
* Gering, Farmbecken reicht
==Eigene Haltungserfahrungen==
Ich habe mir eine kleine Kolonie in Kirchwerder ausgegraben und möchte hier ein wenig von den Erfahrungen berichten.
Diese Art verhält sich gegenüber Futter anders als meine anderen Ameisen. Angebotene Mehlwürmer werden meist innerhalb eines halben Tages mit Sand bedeckt. Unter dem Sand werden die Mehlwürmer dann ausgehöhlt.
Auch Wasserschälchen werden gerne mit Sand zugebaggert, so daß ich sie manchmal als die Schlampen unter den Ameisen tituliere.
[[Kategorie:Tetramorium]]
c3c4daacbe99f7cfe144f6232f96cb92558fc083
186
184
2012-08-22T13:23:47Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium caespitum
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = caespitum
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
}}
==Allgemeines==
Tetramorium caespitum ist eine kleine, relativ langsame Art, die in Sandgebieten vorkommt.
In Hamburg habe ich sie im Hafenrandbereich (Kirchwerder) und in den Holmer Sandbergen finden können.
==Futter==
* Brot
* Insekten
* Zucker-/Honigwasser
==Ausbruchsschutz==
* Paraffinöl
==Platzbedarf==
* Gering, Farmbecken reicht
==Eigene Haltungserfahrungen==
Ich habe mir eine kleine Kolonie in Kirchwerder ausgegraben und möchte hier ein wenig von den Erfahrungen berichten.
Diese Art verhält sich gegenüber Futter anders als meine anderen Ameisen. Angebotene Mehlwürmer werden meist innerhalb eines halben Tages mit Sand bedeckt. Unter dem Sand werden die Mehlwürmer dann ausgehöhlt.
Auch Wasserschälchen werden gerne mit Sand zugebaggert, so daß ich sie manchmal als die Schlampen unter den Ameisen tituliere.
[[Kategorie:Tetramorium]]
627e2a7f027f43dda030018b314f2f164cd4e986
189
186
2012-08-22T13:36:22Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium caespitum
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = caespitum
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Gruendung =
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Brot und Körner, Insekten, Zuckerwasser
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
}}
==Allgemeines==
Tetramorium caespitum ist eine kleine, relativ langsame Art, die in Sandgebieten vorkommt.
In Hamburg habe ich sie im Hafenrandbereich (Kirchwerder) und in den Holmer Sandbergen finden können.
==Eigene Haltungserfahrungen==
Ich habe mir eine kleine Kolonie in Kirchwerder ausgegraben und möchte hier ein wenig von den Erfahrungen berichten.
Diese Art verhält sich gegenüber Futter anders als meine anderen Ameisen. Angebotene Mehlwürmer werden meist innerhalb eines halben Tages mit Sand bedeckt. Unter dem Sand werden die Mehlwürmer dann ausgehöhlt.
Auch Wasserschälchen werden gerne mit Sand zugebaggert, so daß ich sie manchmal als die Schlampen unter den Ameisen tituliere.
[[Kategorie:Tetramorium]]
c8988937aea954768406239d5fbec5540b83b4f3
190
189
2012-08-22T13:38:55Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium caespitum
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = caespitum
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Gruendung =
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Brot und Körner, Insekten, Zuckerwasser
}}
==Allgemeines==
Tetramorium caespitum ist eine kleine, relativ langsame Art, die in Sandgebieten vorkommt.
In Hamburg habe ich sie im Hafenrandbereich (Kirchwerder) und in den Holmer Sandbergen finden können.
==Eigene Haltungserfahrungen==
Ich habe mir eine kleine Kolonie in Kirchwerder ausgegraben und möchte hier ein wenig von den Erfahrungen berichten.
Diese Art verhält sich gegenüber Futter anders als meine anderen Ameisen. Angebotene Mehlwürmer werden meist innerhalb eines halben Tages mit Sand bedeckt. Unter dem Sand werden die Mehlwürmer dann ausgehöhlt.
Auch Wasserschälchen werden gerne mit Sand zugebaggert, so daß ich sie manchmal als die Schlampen unter den Ameisen tituliere.
[[Kategorie:Tetramorium]]
7db4d29ef96731bcc481d08da41d581e4ab9a0cf
Template:Ameisenart
10
47
187
83
2012-08-22T13:29:48Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}} | }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:{{{Gattung}}}|{{{Art}}}]] [[Kategorie:Ameisenart]]}}</includeonly>
<noinclude>
b81175699c5a2978a9b1b6130edf6ca7675acc43
188
187
2012-08-22T13:32:14Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:{{{Gattung}}}|{{{Art}}}]] [[Kategorie:Ameisenart]]}}</includeonly>
<noinclude>
f9925c215e964bf583d167b4428bd41fda716996
191
188
2012-08-22T13:39:38Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:{{{Gattung}}}|{{{Art}}}]] [[Kategorie:Ameisenart]]}}</includeonly>
<noinclude>
718b46fa14836f1636897b68583a75e75db9f4bc
195
191
2012-08-22T13:47:58Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:{{{Gattung}}}|{{{Art}}}]] [[Kategorie:Ameisen]]}}</includeonly>
<noinclude>
075ce6d828cffd9631a62b5079b8bd084d43fe09
196
195
2012-08-22T13:49:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:{{{Gattung}}}]] [[Kategorie:Ameisen]]}}</includeonly>
<noinclude>
d564bb389c84f80a67a8fb29c2138a136bbcb0a3
198
196
2012-08-22T13:57:12Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Gattung|}}}| [http://ameisenwiki.de/index.php/Kategorie:{{{Gattung|}}}]}}
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung|}}}| [http://ameisenwiki.de/index.php/Kategorie:{{{Untergattung|}}}]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisen]]}}</includeonly>
<noinclude>
403ce861370c0d57545efec71ec20d897d21ccfd
199
198
2012-08-22T13:59:57Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#if:{{{Untergattung|}}}|
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisen]]}}</includeonly>
<noinclude>
8886895280e52a14792f8842885429b02e4f5d6a
200
199
2012-08-22T14:01:48Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisen]]}}
</includeonly>
<noinclude>
0577cc347df6fa5ba342ec516cb2a5b9543073e0
201
200
2012-08-22T14:09:19Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisen]]}}
</includeonly>
<noinclude>
7f85cfdb12aa692a15876cb4ca7f92d2574585ca
Formica fusca
0
19
192
106
2012-08-22T13:41:23Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Grauschwarze Sklavenameise
| WissName = Formica fusca
| Autor = Linnaeus, 1758
| Untergattung = Serviformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = fusca
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Insekten, Zuckerwasser
}}
==Allgemeines==
==Eigene Haltungserfahrungen==
789b34320649884c470344560c908deb2baf33f5
193
192
2012-08-22T13:42:49Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Grauschwarze Sklavenameise
| WissName = Formica fusca
| Autor = Linnaeus, 1758
| Untergattung = Serviformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = fusca
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Insekten, Zuckerwasser
}}
== Allgemeines ==
...
== Eigene Haltungserfahrungen ==
...
fe8d4a8a2bfcbda907a55d24c2ad027ce4ae315d
Myrmica rubra
0
14
194
103
2012-08-22T13:45:38Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1758)
| DeName = Rote Gartenameise
| WissName = Myrmica rubra
| Gattung = Myrmica
| Unterfamilie = Myrmicinae
| Art = rubra
| Verbreitung = Europa, Asien, Afrika und Nordamerika
| Habitat = Gärten, Wäldern und Wiesen, zumeist unter Steinen, Holz,<br>o. ä.
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|semiclaustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Insekten, Zuckerwasser
}}
[[Kategorie:Myrmica]]
== Allgemeines ==
...
== Eigene Haltungserfahrungen ==
...
9e836f05ef61c4275b8183792906435f38e73aca
197
194
2012-08-22T13:49:26Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1758)
| DeName = Rote Gartenameise
| WissName = Myrmica rubra
| Gattung = Myrmica
| Unterfamilie = Myrmicinae
| Art = rubra
| Verbreitung = Europa, Asien, Afrika und Nordamerika
| Habitat = Gärten, Wäldern und Wiesen, zumeist unter Steinen, Holz,<br>o. ä.
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|semiclaustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Insekten, Zuckerwasser
}}
== Allgemeines ==
...
== Eigene Haltungserfahrungen ==
...
0d1581d074e8054b3d088e9230fd709683f78abe
Tapinoma sp.
0
60
203
202
2012-08-22T14:11:32Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName =
| WissName = Tapinoma sp.
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = sp.
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = PTFE
| Futter = '''Insekten''', Zuckerwasser
}}
== Allgemeines ==
Tapinoma sp. ist eine kleine quirlige Art, die es warm mag.
== Eigene Haltungserfahrungen ==
...
712edd2c3f17aaf153924db252fd7a51db0728bf
206
203
2012-08-22T14:32:23Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName =
| WissName = Tapinoma sp.
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = sp.
| Koeniginnen = [[Polygynie|polygyn]]
| sizeGynomorphe = 5-6mm
| sizeErgatomorphe = 3-5mm
| Nest = Erdnest
| Ausbruchsschutz = PTFE
| Futter = '''Insekten''', Zuckerwasser
}}
== Allgemeines ==
Tapinoma sp. ist eine kleine quirlige Art, die es warm mag.
== Eigene Haltungserfahrungen ==
...
bc1a9672704ff961b027513089ddc0f59f6bef7d
208
206
2012-08-22T15:11:33Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName =
| WissName = Tapinoma sp.
| Autor =
| Unterfamilie = Dolichoderinae
| Gattung = Tapinoma
| Art = sp.
| Koeniginnen = [[Polygynie|polygyn]]
| sizeGynomorphe = 5-6mm
| sizeErgatomorphe = 3-5mm
| Nest = Erdnest
| Ausbruchsschutz = PTFE
| Nahrung = '''Insekten''', Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja
}}
== Allgemeines ==
Tapinoma sp. ist eine kleine quirlige Art, die es warm mag.
== Eigene Haltungserfahrungen ==
...
56321696089a52e00c55b56146314f933049566f
Template:Ameisenart
10
47
204
201
2012-08-22T14:13:35Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}} |{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisenarten]]}}
</includeonly>
<noinclude>
c7f1a92868b52e57461907b4b4803d941b882684
205
204
2012-08-22T14:31:18Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{sizeGynomorphe|}}}
|{{!-}}
{{!}} Gyne:
{{!}} {{{sizeGynomorphe|}}} |{{!-}}}}
{{#if:{{{sizeErgatomorphe|}}}
|{{!-}}
{{!}} Arbeiterinnen:
{{!}} {{{sizeErgatomorphe|}}} |{{!-}}}}
{{#if:{{{sizeMajor|}}}
|{{!-}}
{{!}} Majorarbeiterinnen:
{{!}} {{{sizeMajor|}}} |{{!-}}}}
{{#if:{{{sizeSoldat|}}}
|{{!-}}
{{!}} Soldaten:
{{!}} {{{sizeSoldat|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}}
|{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Futter|}}}
| {{!-}}
{{!}} Futter:
{{!}} {{{Futter|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisenarten]]}}
</includeonly>
<noinclude>
Königin: 5-6mm
Arbeiterinnen: 3-5mm
Nahrung: Insekten und Honigwasser
Luftfeuchtigkeit: 40-60%
Temperatur: 20-30°C
Winterruhe:
8f7418555a523bc2d564565e27fb6922302babdc
207
205
2012-08-22T15:08:12Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung|}}}]]{{{Bildbeschreibung|}}} }}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{sizeGynomorphe|}}}
|{{!-}}
{{!}} Gyne:
{{!}} {{{sizeGynomorphe|}}} |{{!-}}}}
{{#if:{{{sizeErgatomorphe|}}}
|{{!-}}
{{!}} Arbeiterinnen:
{{!}} {{{sizeErgatomorphe|}}} |{{!-}}}}
{{#if:{{{sizeMajor|}}}
|{{!-}}
{{!}} Majorarbeiterinnen:
{{!}} {{{sizeMajor|}}} |{{!-}}}}
{{#if:{{{sizeSoldat|}}}
|{{!-}}
{{!}} Soldaten:
{{!}} {{{sizeSoldat|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}}
|{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{Winterruhe|}}}
| {{!-}}
{{!}} Winterruhe:
{{!}} {{{Winterruhe|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Ameisenarten]]}}
</includeonly>
<noinclude>
d9847d28b97ddfce3b7ce3fc5773178184d2475a
Formica fusca
0
19
209
193
2012-08-22T15:13:12Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Grauschwarze Sklavenameise
| WissName = Formica fusca
| Autor = Linnaeus, 1758
| Untergattung = Serviformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = fusca
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Nahrung = Insekten, Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja
}}
== Allgemeines ==
...
== Eigene Haltungserfahrungen ==
...
c9fe70d74b832c31a307ff4a8c6ceaa5c9527aef
Myrmica rubra
0
14
210
197
2012-08-22T15:14:25Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1758)
| DeName = Rote Gartenameise
| WissName = Myrmica rubra
| Gattung = Myrmica
| Unterfamilie = Myrmicinae
| Art = rubra
| Verbreitung = Europa, Asien, Afrika und Nordamerika
| Habitat = Gärten, Wäldern und Wiesen, zumeist unter Steinen, Holz,<br>o. ä.
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|semiclaustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Nahrung = Insekten, Zuckerwasser
| Luftfeuchtigkeit =
| Temperatur = 20-30°C
| Winterruhe = Ja
}}
== Allgemeines ==
...
== Eigene Haltungserfahrungen ==
...
40846cd5c94af3ec5734c6b4b945662bfe4b76aa
Lasius flavus
0
17
211
107
2012-08-22T15:17:33Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Gelbe Wiesenameise
| WissName = Lasius flavus
| Autor = (Fabricius, 1782)
| Untergattung = Cautolasius
| Gattung = Lasius
| Unterfamilie = Formicinae
| Art = flavus
| Verbreitung = Europa
| Habitat = Wiese
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Oligogynie|oligogyn]]
| maxKolo = 100.000
| sizeGynomorphe = 8-9mm
| sizeErgatomorphe = 3-4mm
| Nest = Erdnest
| Ausbruchsschutz = PTFE
| Nahrung = '''Insekten''', Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 18-30°C
| Winterruhe = Ja
}}
d361f5327da486ca053a3e7103fdec7062753fa9
212
211
2012-08-22T15:18:28Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Gelbe Wiesenameise
| WissName = Lasius flavus
| Autor = (Fabricius, 1782)
| Untergattung = Cautolasius
| Gattung = Lasius
| Unterfamilie = Formicinae
| Art = flavus
| Verbreitung = Europa
| Habitat = Wiese
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Oligogynie|oligogyn]]
| maxKolo = 100.000
| sizeGynomorphe = 8-9mm
| sizeErgatomorphe = 3-4mm
| Nest = Erdnest
| Ausbruchsschutz = PTFE
| Nahrung = Insekten, '''Zuckerwasser'''
| Luftfeuchtigkeit = 40-60%
| Temperatur = 18-30°C
| Winterruhe = Ja
}}
958d71a24b91e1700b9ba187dee3d5ff7ebbeb99
Lasius fuliginosus
0
46
213
110
2012-08-22T15:21:44Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Latreille, 1798)
| WissName = Lasius fuliginosus
| Gattung = Lasius
| Untergattung = Dendrolasius
| Unterfamilie = Formicinae
| Art = fuliginosus
| Bild = Lasius fuliginosus.jpg
| Bildbeschreibung = <br />''L. fuliginosus'' Arbeiterin
| Verbreitung = Mitteleuropa
| Habitat = Laub- und Nadelwälder, Parks; meist in Holz nistend
| Gruendung = [[Gründung#Die abhängige Koloniegründung durch temporären Sozialparasitismus|sozialparasitär]]
| Koeniginnen = [[Polygynie|polygyn]], [[Oligogynie|oligogyn]]
| maxKolo = bis 2 Millionen
| sizeGynomorphe = 6-7mm
| sizeErgatomorphe = 4-6mm
| Nest = Erdnest mit Anschluß in morsches Holz
| Ausbruchsschutz = PTFE
| Nahrung = '''Insekten''', Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja, Oktober-März bei 5-8°C
}}
0bd1161f8e9d7357790b5168e6e97f074e3b5a99
217
213
2012-08-22T15:34:50Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Latreille, 1798)
| WissName = Lasius fuliginosus
| Gattung = Lasius
| Untergattung = Dendrolasius
| Unterfamilie = Formicinae
| Art = fuliginosus
| Bild = Lasius fuliginosus.jpg
| Bildbeschreibung = <br />''L. fuliginosus'' Arbeiterin
| Verbreitung = Mitteleuropa
| Habitat = Laub- und Nadelwälder, Parks; meist in Holz nistend
| Gruendung = [[Gründung#Die abhängige Koloniegründung durch temporären Sozialparasitismus|sozialparasitär]] in Nestern von [[Lasius umbratus]]
| Koeniginnen = [[Polygynie|polygyn]], [[Oligogynie|oligogyn]]
| maxKolo = bis 2 Millionen
| sizeGynomorphe = 6-7mm
| sizeErgatomorphe = 4-6mm
| Nest = Erdnest mit Anschluß in morsches Holz
| Ausbruchsschutz = PTFE
| Nahrung = Insekten, Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja, Oktober-März bei 5-8°C
}}
6ae275bbafdef4f51a5f045a2f564147c4ae624e
ZFS fileinfo
0
90
214
2012-08-22T15:30:26Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:ZFS]] Wenn man nachträglich z.B. sehen möchte, mit welcher Blocksize ein File angelegt wurde, so kann man sich das anschauen mit zdb: # zdb -ddd <…“
wikitext
text/x-wiki
[[Kategorie:ZFS]]
If you later want to see, e.g., which block size a file was created with, you can inspect it with zdb:
<pre>
# zdb -ddd <ZFS> <i-Node>
</pre>
e.g.
<pre>
# ls -i /.globaldevices
524575 /.globaldevices
# zdb -dddd rpool/ROOT/zfsBE 524575
Dataset rpool/ROOT/zfsBE [ZPL], ID 45, cr_txg 8, 27.5G, 459538 objects, rootbp DVA[0]=<0:b1eb43600:200:STD:1> DVA[1]=<0:da0e39e00:200:STD:1> [L0 DMU objset] fletcher4 lzjb BE contiguous unique 2-copy size=800L/200P birth=3168L/3168P fill=459538 cksum=17cad0b0f0:7230399a8a3:134096738e1d8:25bba0c8eec052
Object lvl iblk dblk dsize lsize %full type
524575 3 16K 128K 100M 100M 100.00 ZFS plain file
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 799
path /.globaldevices
uid 0
gid 0
atime Wed Aug 22 09:50:28 2012
mtime Wed Aug 22 09:50:28 2012
ctime Wed Aug 22 09:50:28 2012
crtime Wed Aug 22 09:47:15 2012
gen 2639
mode 101600
size 104857600
parent 4
links 1
</pre>
f18e68615e8b05adb9ea60837a4a693dcfd1eb92
215
214
2012-08-22T15:31:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
If you later want to see, e.g., which block size a file was created with, you can inspect it with zdb:
<pre>
# zdb -ddd <ZFS> <i-Node>
</pre>
e.g.
<pre>
# ls -i /.globaldevices
524575 /.globaldevices
# zdb -dddd rpool/ROOT/zfsBE 524575
Dataset rpool/ROOT/zfsBE [ZPL], ID 45, cr_txg 8, 27.5G, 459538 objects, rootbp DVA[0]=<0:b1eb43600:200:STD:1> DVA[1]=<0:da0e39e00:200:STD:1> [L0 DMU objset] fletcher4 lzjb BE contiguous unique 2-copy size=800L/200P birth=3168L/3168P fill=459538 cksum=17cad0b0f0:7230399a8a3:134096738e1d8:25bba0c8eec052
Object lvl iblk dblk dsize lsize %full type
524575 3 16K 128K 100M 100M 100.00 ZFS plain file
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 799
path /.globaldevices
uid 0
gid 0
atime Wed Aug 22 09:50:28 2012
mtime Wed Aug 22 09:50:28 2012
ctime Wed Aug 22 09:50:28 2012
crtime Wed Aug 22 09:47:15 2012
gen 2639
mode 101600
size 104857600
parent 4
links 1
</pre>
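The numbers above are internally consistent: with the 128 KiB data block size (dblk), the 100 MB file spans 800 blocks, which matches "dnode maxblkid: 799" since block ids start at 0. A quick sketch of the arithmetic:

```shell
# Block count for the file shown by zdb above:
# size 104857600 bytes, dblk 128K = 131072 bytes.
size=104857600
dblk=131072
blocks=$(( (size + dblk - 1) / dblk ))   # round up to whole blocks
echo "$blocks blocks, last block id $(( blocks - 1 ))"
```

This prints "800 blocks, last block id 799", matching the zdb output.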
b62e1feb1eb15d20c808079b4ecd426203041756
ZFS cheatsheet
0
29
216
182
2012-08-22T15:33:44Z
Lollypop
2
/* Migration UFS-Root -> ZFS-Root via Live-Upgrade */
wikitext
text/x-wiki
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS comes from its very large appetite for cache, which can be limited.
First, take stock of the current situation:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Or ZFS only:
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Print all ARC parameters:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
You can also print all parameters that are set for ZFS with:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Kernel parameters can also be set online with:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
This sets zfs_arc_max to 4 GiB (0x100000000).
== Limiting the ARC cache ==
Simply add to /etc/system:
<pre>
set zfs:zfs_arc_max = <Number of bytes>
</pre>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== Displaying ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not available as a shortcut yet, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
First create the ZFS root pool:
<pre>
# zpool create rpool /dev/dsk/<disk>
</pre>
To avoid trouble later, keep the pool name rpool.
Create a boot environment (BE) with lucreate:
<pre>
# lucreate -c ufsBE -n zfsBE -p rpool
</pre>
This copies the files into the ZFS environment.
Check that the boot FS has been set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries that may be left over in /etc/system:
<pre>
# zpool export rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Activate the new BE:
<pre>
# luactivate zfsBE
</pre>
[[Kategorie:ZFS]]
38d5fdd383851827b08ede4f6bcb8f30a06ceece
218
216
2012-08-22T15:37:55Z
Lollypop
2
/* Migration UFS-Root -> ZFS-Root via Live-Upgrade */
wikitext
text/x-wiki
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS comes from its very large appetite for cache, which can be limited.
First, take stock of the current situation:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Or ZFS only:
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Print all ARC parameters:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
You can also print all parameters that are set for ZFS with:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Kernel parameters can also be set online with:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
This sets zfs_arc_max to 4 GiB (0x100000000).
== Limiting the ARC cache ==
Simply add to /etc/system:
<pre>
set zfs:zfs_arc_max = <Number of bytes>
</pre>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== Displaying ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not available as a shortcut yet, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
First create the ZFS root pool:
<pre>
# zpool create rpool /dev/dsk/<zfs-disk>
</pre>
To avoid trouble later, keep the pool name rpool.
Create a boot environment (BE) with lucreate:
<pre>
# lucreate -c ufsBE -n zfsBE -p rpool
</pre>
This copies the files into the ZFS environment.
Check that the boot FS has been set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries that may be left over in /etc/system:
<pre>
# zpool export rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the ZFS boot block on the ZFS disk:
<pre>
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
</pre>
Activate the new BE:
<pre>
# luactivate zfsBE
</pre>
[[Kategorie:ZFS]]
e6b8a46b441bf962efd970c30e62a8270f75268e
219
218
2012-08-22T16:03:43Z
Lollypop
2
/* Migration UFS-Root -> ZFS-Root via Live-Upgrade */
wikitext
text/x-wiki
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS comes from its very large appetite for cache, which can be limited.
First, take stock of the current situation:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
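As a sanity check on the ::memstat output above: assuming 4 KiB pages (the usual page size on Solaris x86 — an assumption here), dividing the page count by 256 reproduces the MB column, and each row's share of Total reproduces %Tot:

```shell
# Verify the Kernel row of ::memstat: 255013 pages out of 1045037 total.
pages_kernel=255013
pages_total=1045037
mb=$(( pages_kernel / 256 ))                  # 4096-byte pages => 256 pages per MB
pct=$(( pages_kernel * 100 / pages_total ))   # integer percentage
echo "Kernel: ${mb} MB, ${pct}%Tot"
```

This prints "Kernel: 996 MB, 24%Tot", matching the table.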
Or ZFS only:
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Print all ARC parameters:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
You can also print all parameters that are set for ZFS with:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Kernel parameters can also be set online with:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
This sets zfs_arc_max to 4 GiB (0x100000000).
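A quick check of the hex value (a sketch; shell arithmetic understands the 0x prefix):

```shell
# 0x100000000 bytes expressed in GiB.
bytes=$(( 0x100000000 ))
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "zfs_arc_max = ${bytes} bytes = ${gib} GiB"
```

This prints "zfs_arc_max = 4294967296 bytes = 4 GiB".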
== Limiting the ARC cache ==
Simply add to /etc/system:
<pre>
set zfs:zfs_arc_max = <Number of bytes>
</pre>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== Displaying ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not available as a shortcut yet, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
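The extra columns decompose USED: for every dataset, USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD (rounding in zfs list can make the sum slightly off for some rows). Checked against the rpool/swap row above, values in MB:

```shell
# rpool/swap: USEDSNAP=0, USEDDS=111M, USEDREFRESERV=401M, USEDCHILD=0
usedsnap=0; usedds=111; usedrefreserv=401; usedchild=0
used=$(( usedsnap + usedds + usedrefreserv + usedchild ))
echo "USED = ${used}M"
```

This prints "USED = 512M", matching the USED column.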
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
First create the ZFS root pool:
<pre>
# zpool create rpool /dev/dsk/<zfs-disk>
</pre>
To avoid trouble later, keep the pool name rpool.
Create a boot environment (BE) with lucreate:
<pre>
# lucreate -c ufsBE -n zfsBE -p rpool
</pre>
This copies the files into the ZFS environment.
Check that the boot FS has been set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries that may be left over in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
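What the perl one-liner does, shown on a sample file (the rootdev line below is a made-up example; in /etc/system a leading * marks a comment, and perl -pi.orig keeps a backup with the .orig suffix):

```shell
# Demonstrate the rootdev-commenting substitution on a scratch file.
f=$(mktemp)
printf 'rootdev:/pseudo/md@0:0,0,blk\nset noexec_user_stack=1\n' > "$f"
perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' "$f"   # same substitution as above
head -1 "$f"
```

The first line comes back as "* rootdev:/pseudo/md@0:0,0,blk" while unrelated lines are left untouched.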
Install the ZFS boot block on the ZFS disk:
<pre>
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
</pre>
Activate the new BE:
<pre>
# luactivate zfsBE
</pre>
[[Kategorie:ZFS]]
6665f3c2afbd8f5d61f057ae57d75f5fdb44a524
NetApp move root vol
0
91
220
2012-09-06T13:27:51Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:NetApp]] =Migration des Root-Volumes auf ein neues Aggregat= Das kann auch benutzt werden, um auf ein 64-Bit Aggregat zu migrieren... ==Anlegen des…“
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Migrating the root volume to a new aggregate=
This can also be used to migrate to a 64-bit aggregate...
==Creating the new root volume==
<pre>
> priv set advanced
*> aggr create aggr0_new -B 64 -t raid4 -d <disk0> <disk1>
*> vol create vol0_new aggr0_new 250g
</pre>
==Copying the data==
To be able to enable ndmpd, at least one network interface must have a link!
<pre>
*> options ndmpd.enable on
*> ndmpcopy -f /vol/vol0 /vol/vol0_new
Ndmpcopy: Starting copy [ 0 ] ...
...
*> vol options vol0_new root
*> reboot
</pre>
==Cleaning up==
<pre>
> vol status
Volume State Status Options
vol0_new online raid4, flex root, create_ucode=on
64-bit
vol0 online raid_dp, flex create_ucode=on
64-bit
> vol offline vol0
> vol destroy vol0
> aggr offline aggr0
> aggr destroy aggr0
> disk zero spares
</pre>
==Setting the standard names==
<pre>
> aggr rename aggr0_new aggr0
> vol rename vol0_new vol0
> reboot
</pre>
==Converting the aggregate to raid_dp using the freed-up disks==
<pre>
> aggr status -r aggr0
Aggregate aggr0 (online, raid4) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
parity 0a.00.23 0a 0 23 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
data 0a.00.6 0a 0 6 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
> aggr add aggr0 1
> aggr options aggr0 raidtype raid_dp
> aggr status -r aggr0
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.23 0a 0 0 SA:B 0 BSAS 7200 1695466/3472315904 1695759/3472914816
parity 0a.00.6 0a 0 2 SA:B 0 BSAS 7200 1695466/3472315904 1695759/3472914816
data 0a.00.1 0a 0 4 SA:B 0 BSAS 7200 1695466/3472315904 1695759/3472914816
</pre>
1fb73a4df4fb18ab57b57610c2984129f008d06a
221
220
2012-09-07T12:18:36Z
Lollypop
2
/* Umbau des Aggregat auf raid_dp mit Hilfe der frei gewordenen Platten */
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=Migrating the root volume to a new aggregate=
This can also be used to migrate to a 64-bit aggregate...
==Creating the new root volume==
<pre>
> priv set advanced
*> aggr create aggr0_new -B 64 -t raid4 -d <disk0> <disk1>
*> vol create vol0_new aggr0_new 250g
</pre>
==Copying the data==
To be able to enable ndmpd, at least one network interface must have a link!
<pre>
*> options ndmpd.enable on
*> ndmpcopy -f /vol/vol0 /vol/vol0_new
Ndmpcopy: Starting copy [ 0 ] ...
...
*> vol options vol0_new root
*> reboot
</pre>
==Cleaning up==
<pre>
> vol status
Volume State Status Options
vol0_new online raid4, flex root, create_ucode=on
64-bit
vol0 online raid_dp, flex create_ucode=on
64-bit
> vol offline vol0
> vol destroy vol0
> aggr offline aggr0
> aggr destroy aggr0
> disk zero spares
</pre>
==Setting the standard names==
<pre>
> aggr rename aggr0_new aggr0
> vol rename vol0_new vol0
> reboot
</pre>
==Converting the aggregate to raid_dp using the freed-up disks==
<pre>
> aggr status -r aggr0
Aggregate aggr0 (online, raid4) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
parity 0a.00.23 0a 0 23 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
data 0a.00.6 0a 0 6 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
> aggr options aggr0 raidtype raid_dp
> aggr status -r aggr0
Aggregate aggr0 (online, raid_dp, reconstruct) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (reconstruction 0% completed, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.3 0a 0 3 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816 (reconstruction 0% completed)
parity 0a.00.23 0a 0 23 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
data 0a.00.6 0a 0 6 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
</pre>
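Rule of thumb for the capacity effect of the conversion above: raid_dp reserves two parity disks per RAID group (dparity + parity) instead of raid4's one, so of the three equal disks only one remains for data. A rough sketch, with the per-disk size taken from the Used column above (equal disk sizes assumed; WAFL and spare overhead ignored):

```shell
# Data capacity = (disks - parity disks) * per-disk size.
disk_mb=1695466     # per-disk "Used (MB)" from aggr status -r
disks=3
parity_raid4=1      # raid4: one parity disk
parity_raid_dp=2    # raid_dp: dparity + parity
echo "raid4 data capacity:   $(( (disks - parity_raid4) * disk_mb )) MB"
echo "raid_dp data capacity: $(( (disks - parity_raid_dp) * disk_mb )) MB"
```

With three disks the move from raid4 to raid_dp costs one data disk of capacity.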
679995fab22fd7eb7142dbe5efc6096cd6a54dda
Exim cheatsheet
0
27
222
50
2012-09-26T08:20:13Z
Lollypop
2
/* Wie finde ich eine bestimmte Mail in der Queue? */
wikitext
text/x-wiki
=Questions and answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <Mailadresse></pre>
===With full debugging===
<pre># exim -bv -d+all <Mailadresse></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <Domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-jjjjmmdd</pre>
oder es kann in der Queue gesucht werden
<pre># exiqgrep -r <pattern></pre>
Besser, als exigrep ist exipick!
Ausgabe aller frozen Mails in der Queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address for all mails that are more than 40 and less than 50 minutes old and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <Parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
[[Kategorie:Exim]]
748cd6b39f79d3304942cda9b9a96edc13330909
223
222
2012-09-26T08:27:07Z
Lollypop
2
/* Wie finde ich eine bestimmte Mail in der Queue? */
wikitext
text/x-wiki
=Questions and Answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mail address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <Mailadresse></pre>
==How do I trigger delivery of all mails for a given domain?==
<pre># exim -Rff <Domain></pre>
==How do I re-trigger delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue directly
<pre># exiqgrep -r <pattern></pre>
exipick is even better than exigrep!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address for all mails that are more than 40 and less than 50 minutes old and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
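The relative ages (`40m`, `50m`) are plain numbers with a unit suffix. If exipick is not available, the comparison can be reproduced by hand; a small helper sketch (the function is hypothetical, not part of exipick):

```shell
# Convert an exipick-style age spec (s/m/h/d suffix) to seconds.
# Hypothetical helper for doing the age comparison without exipick.
age_to_seconds() {
  case "$1" in
    *s) echo "${1%s}" ;;
    *m) echo $(( ${1%m} * 60 )) ;;
    *h) echo $(( ${1%h} * 3600 )) ;;
    *d) echo $(( ${1%d} * 86400 )) ;;
    *)  echo "$1" ;;
  esac
}
age_to_seconds 40m   # prints: 2400
```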
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <Parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
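The `nawk '{print $NF;}'` stage simply keeps the last whitespace-separated field of exim's `key = value` output. A sketch with the exim call simulated by printf, since exim may not be installed on the box you test on:

```shell
# Extract the value from 'exim -bP spool_directory' style output,
# exactly as the find pipeline above does with nawk.
spool_dir() {
  printf 'spool_directory = /var/spool/exim\n' | awk '{ print $NF }'
}
spool_dir   # prints: /var/spool/exim
```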
[[Kategorie:Exim]]
8a1804e9b88982fd719cf96e2575111d7b5cfd85
Formica sanguinea
0
92
224
2012-09-27T14:32:43Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Ameisenart | DeName = Blutrote Raubameise | WissName = Formica fusca | Autor = Linnaeus, 1798 | Untergattung = Raptiformica | Gatt…“
wikitext
text/x-wiki
{{Ameisenart
| DeName = Blutrote Raubameise
| WissName = Formica fusca
| Autor = Linnaeus, 1798
| Untergattung = Raptiformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = sanguinea
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Nahrung = Insekten, Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja
}}
== General ==
...
== My own husbandry experience ==
...
3198076b20be9ab087531be9e84eae45c1e5cb15
225
224
2012-09-27T14:33:03Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Blutrote Raubameise
| WissName = Formica sanguinea
| Autor = Linnaeus, 1798
| Untergattung = Raptiformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = sanguinea
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Nahrung = Insekten, Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja
}}
== General ==
...
== My own husbandry experience ==
...
9a5c01b1b8b6396e18bcf59da01eb61e04b6f69c
226
225
2012-09-27T14:34:28Z
Lollypop
2
wikitext
text/x-wiki
{{Ameisenart
| DeName = Blutrote Raubameise
| WissName = Formica sanguinea
| Autor = Latreille, 1798
| Untergattung = Raptiformica
| Gattung = Formica
| Unterfamilie = Formicinae
| Art = sanguinea
| Verbreitung = Mitteleuropa bis Fennoskandien
| Habitat = Trockene bis mäßig beschattete Bereiche
| Gruendung = [[Sozialparasitisch]], [[Pleometrose]]
| Koeniginnen = [[Polygynie|polygyn]]
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Nahrung = Insekten, Zuckerwasser
| Luftfeuchtigkeit = 40-60%
| Temperatur = 20-30°C
| Winterruhe = Ja
}}
== General ==
...
== My own husbandry experience ==
...
3c6a09de006d08e2b251db59588c989d2a773493
ZFS Recovery
0
30
227
53
2012-11-06T09:41:37Z
Lollypop
2
wikitext
text/x-wiki
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602: Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<pre>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -T 40353853 defect_pool
</pre>
[[Kategorie:ZFS]]
1aef34f902f421d9f9d04fc18ec18aa8d033b2d0
228
227
2012-11-06T09:43:45Z
Lollypop
2
/* Zurückgehen auf einen früheren Uberblock */
wikitext
text/x-wiki
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602: Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<pre>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -T <txg> defect_pool
So, e.g., to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853
# zpool import -T 40353853 defect_pool
</pre>
[[Kategorie:ZFS]]
f08d526fb317bf34cf725004365c2570a74290e0
229
228
2012-11-06T10:07:34Z
Lollypop
2
/* Zurückgehen auf einen früheren Uberblock */
wikitext
text/x-wiki
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602: Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<pre>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -T <txg> defect_pool
So, e.g., to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</pre>
[[Kategorie:ZFS]]
5e91d88b2a5d2aa0adb2699138c74b44f3d8a999
230
229
2012-11-06T13:35:51Z
Lollypop
2
/* Zurückgehen auf einen früheren Uberblock */
wikitext
text/x-wiki
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602: Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<pre>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
or
# zpool import -c defect_pool.cachefile
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -T <txg> defect_pool
So, e.g., to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</pre>
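The nawk program above carries each `txg =` value forward and prints it next to the following `timestamp =` line. Its pairing logic can be exercised against a canned `zdb -lu` fragment (values taken from the listing above; zdb itself needs the pool devices):

```shell
# Pair txg and timestamp lines from (simulated) 'zdb -lu' label output.
txg_table() {
cat <<'EOF' | awk '/txg =/ { txg = $NF } /timestamp =/ { printf "txg %d\t%s\n", txg, $0 }'
    txg = 40353853
    timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
EOF
}
txg_table   # pairs each txg with its timestamp line
```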
[[Kategorie:ZFS]]
499418518b3a510dff8d53122b2f3d5ac91297fd
Solaris IPMP
0
73
231
141
2012-12-11T10:06:09Z
Lollypop
2
wikitext
text/x-wiki
==Set the network configuration to manual==
<pre>
netadm enable -p ncp defaultfixed
</pre>
==Simple IPMP with a standby interface==
<pre>
ipadm create-ip net0
ipadm create-ip net1
ipadm set-ifprop -p standby=on -m ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a local=1.2.3.4/24 ipmp0/v4
</pre>
==Link-based IPMP in a VLAN (here VLAN 2)==
<pre>
# Configure the VLAN interfaces
dladm create-vlan -l net1 -v 2 net1_vlan2
dladm create-vlan -l net2 -v 2 net2_vlan2
# Configure the VLAN interfaces for IP
ipadm create-ip net1_vlan2
ipadm create-ip net2_vlan2
# Configure the IPMP interface
ipadm create-ipmp -i net1_vlan2,net2_vlan2 ipmp0
# And configure an IP on the IPMP interface as usual
ipadm create-addr -T static -a local=10.1.2.106/24 ipmp0
# And set the default route persistently
route -p add default 10.1.2.254
</pre>
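Since only the VLAN ID and the two physical links vary, the sequence can be wrapped in a dry-run generator that merely prints the commands for review (a sketch; the function name is made up, and nothing is executed):

```shell
# Print (not run) the dladm/ipadm commands for link-based IPMP in a VLAN.
vlan_ipmp_cmds() {
  vid=$1; l1=$2; l2=$3
  echo "dladm create-vlan -l $l1 -v $vid ${l1}_vlan${vid}"
  echo "dladm create-vlan -l $l2 -v $vid ${l2}_vlan${vid}"
  echo "ipadm create-ip ${l1}_vlan${vid}"
  echo "ipadm create-ip ${l2}_vlan${vid}"
  echo "ipadm create-ipmp -i ${l1}_vlan${vid},${l2}_vlan${vid} ipmp0"
}
vlan_ipmp_cmds 2 net1 net2
```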
[[Kategorie:Solaris]]
eed62514c28d8af9330e26a0edac45580be4ffd3
Solaris perl
0
93
232
2012-12-12T16:22:58Z
Lollypop
2
Die Seite wurde neu angelegt: „==Module::Build / Build.PL== Bei Fehlermeldungen a la <pre> gcc: unrecognized option '-KPIC' gcc: language O4 not recognized </pre> beim bauen ovn Perlmodulen unt…“
wikitext
text/x-wiki
==Module::Build / Build.PL==
If you get error messages like
<pre>
gcc: unrecognized option '-KPIC'
gcc: language O4 not recognized
</pre>
when building Perl modules on Solaris, you can try overriding Module::Build's default variables:
<pre>
# /usr/perl5/bin/perlgcc Build.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
# make
</pre>
[[kategorie:Solaris]]
d91dc0eb008780959653f86949dda960a568ec12
233
232
2012-12-12T16:45:37Z
Lollypop
2
/* Module::Build / Build.PL */
wikitext
text/x-wiki
==Module::Build / Build.PL==
If you get error messages like
<pre>
gcc: unrecognized option '-KPIC'
gcc: language O4 not recognized
</pre>
when building Perl modules on Solaris, you can try overriding Module::Build's default variables:
<pre>
# /usr/perl5/bin/perlgcc Build.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
# make
</pre>
The same applies to Makefile.PL:
<pre>
/usr/perl5/bin/perlgcc Makefile.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
</pre>
[[kategorie:Solaris]]
168620d4336aa01859dbe8fde2a5c9573598d23a
234
233
2012-12-12T16:47:29Z
Lollypop
2
wikitext
text/x-wiki
==Module::Build / Build.PL==
If you get error messages like
<pre>
gcc: unrecognized option '-KPIC'
gcc: language O4 not recognized
</pre>
when building Perl modules on Solaris, you can try overriding Module::Build's default variables:
<pre>
# /usr/perl5/bin/perlgcc Build.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
# make
</pre>
The same applies to Makefile.PL:
<pre>
/usr/perl5/bin/perlgcc Makefile.PL cc=gcc ld=gcc optimize='-O2' cccdlflags='-DPIC'
</pre>
[[kategorie:Solaris]]
6c493e671e190d095b66c190b88f391888674c66
235
234
2012-12-13T10:09:06Z
Lollypop
2
wikitext
text/x-wiki
==Module::Build / Build.PL==
If you get error messages like
<pre>
gcc: unrecognized option '-KPIC'
gcc: language O4 not recognized
</pre>
when building Perl modules on Solaris, you can try overriding Module::Build's default variables:
<pre>
# /usr/perl5/bin/perlgcc Build.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
# make
</pre>
The same applies to Makefile.PL:
<pre>
/usr/perl5/bin/perlgcc Makefile.PL cc=gcc ld=gcc optimize='-O2' cccdlflags='-DPIC'
</pre>
==Environment variables for programs that use MakeMaker==
On Solaris you often run into problems when only GCC is installed. Invoking /usr/perl5/bin/perlgcc helps in most cases.
For SpamAssassin's sa-compile, however, that is not enough. There it helps to set the required parameters via PERL_MM_OPT:
<pre>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile -D
</pre>
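PERL_MM_OPT is nothing more than space-separated `key=value` pairs whose names you can check against `perl -V` output. A trivial sketch that assembles the string used above (the helper name is made up):

```shell
# Assemble a PERL_MM_OPT value from individual MakeMaker settings.
mm_opt() {
  printf 'optimize=%s cc=%s ld=%s cccdlflags=%s\n' '-O2' gcc gcc '-DPIC'
}
mm_opt   # prints: optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC
```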
The parameter names can be found out with <i>perl -V</i>.
More on this topic can be found [http://search.cpan.org/~mschwern/ExtUtils-MakeMaker/lib/ExtUtils/MakeMaker.pm here].
[[kategorie:Solaris]]
1058216da61d7fce11fe9ebe59bbbc5c31d5082f
Category:Solaris11
14
95
239
2013-04-02T09:16:38Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: Solaris]]“
wikitext
text/x-wiki
[[Kategorie: Solaris]]
99aff038d2513d4f65f6661f5e2366f3d554775a
Solaris 11 Networking
0
96
240
2013-04-04T10:53:39Z
Lollypop
2
Die Seite wurde neu angelegt: „ = DNS = == Client == <pre> # svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )" # svccfg -s svc:/netw…“
wikitext
text/x-wiki
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
18325105a8ad1a27ac8d603ca1742ecfc4046122
241
240
2013-04-04T10:58:44Z
Lollypop
2
wikitext
text/x-wiki
= Interfaces =
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
0ea198e88723d10d1aca237c1c2f672042885953
242
241
2013-04-04T11:18:04Z
Lollypop
2
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Normal ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster1
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
3d77ee18c70e0160d979a509ff858ed39595334b
243
242
2013-04-04T11:21:45Z
Lollypop
2
/* IPMP */
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Normal ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster1
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
139.11.5.101 up ipmp0 net3 net2 net3
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
9a403e114d3a9a1bd4f821aa37e4f13a57f3430e
244
243
2013-04-04T11:25:25Z
Lollypop
2
/* IPMP */
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Normal ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
139.11.5.101 up ipmp0 net3 net2 net3
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in via the new IP.
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
4866f5bc47f870b0d3e383bfbdbc5ee06b3425b1
245
244
2013-04-04T11:26:31Z
Lollypop
2
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
139.11.5.101 up ipmp0 net3 net2 net3
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in via the new IP.
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
d7b955b7a4c0af6882828ada5a83d7cd801f1cb9
246
245
2013-04-08T16:07:22Z
Lollypop
2
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
139.11.5.101 up ipmp0 net3 net2 net3
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in via the new IP.
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client:default setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client:default setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
0ba75c5e5b760290c91ff5076c21df59dcf7693e
247
246
2013-04-08T16:09:01Z
Lollypop
2
/* Client */
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
139.11.5.101 up ipmp0 net3 net2 net3
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in via the new IP.
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
8b982a9d2a3982ee559645c6ab963dcc613f751d
248
247
2013-04-08T16:23:59Z
Lollypop
2
/* Client */
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in to the new IP, then delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
8c7ca0a5acfb09aa8cb7fc10a2c4167423dc4566
249
248
2013-04-08T16:26:15Z
Lollypop
2
/* IPMP */
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in to the new IP, then delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
4b9d95ed2d8eacc752c646255a197dcc4e540e03
250
249
2013-04-08T17:04:25Z
Lollypop
2
/* IPMP */
wikitext
text/x-wiki
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
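A failover can also be provoked for testing by temporarily disabling one of the underlying interfaces and watching the group recover (a sketch only; interface names as above, output omitted):
<pre>
# ipadm disable-if -t net3
# ipmpstat -i
# ipadm enable-if -t net3
# ipmpstat -g
</pre>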
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in to the new IP, then delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
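To verify the client side, the effective nameserver and a lookup can be checked (the hostname is just an example):
<pre>
# svcprop -p config/nameserver svc:/network/dns/client
# getent hosts www.timmann.de
</pre>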
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
0b1a0b395979bda6186b349a5042f0d1bf4db67c
Solaris 11 First Steps
0
97
251
2013-04-10T07:43:17Z
Lollypop
2
Die Seite wurde neu angelegt: „= What's different to Solaris 10 = == Networking == === Interface Names === === Etherstubs and VNICs === === ipadm === == Package Management == [[IPS_cheat_sheet|…“
wikitext
text/x-wiki
= What's different to Solaris 10 =
== Networking ==
=== Interface Names ===
=== Etherstubs and VNICs ===
=== ipadm ===
== Package Management ==
[[IPS_cheat_sheet|Some examples]]
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Live Upgrade ==
== Distro Constructor ==
bb09226384ac4e76c207d70fa1ba0139e01bfacb
255
251
2013-04-10T07:58:40Z
Lollypop
2
wikitext
text/x-wiki
= What's different to Solaris 10 =
== Networking ==
=== Interface Names ===
=== Etherstubs and VNICs ===
=== ipadm ===
== Package Management ==
[[IPS_cheat_sheet|Some examples]]
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Live Upgrade ==
== Distro Constructor ==
[[Kategorie:Solaris11]]
00259566e7ab0d99f9095753b5418bee15dc5506
IPS cheat sheet
0
98
252
2013-04-10T07:52:06Z
Lollypop
2
Die Seite wurde neu angelegt: „== Repairing packages == Damn fast fingers did <pre> root@solaris11:/home/lollypop# rm /usr/bin/ls </pre> So... the file is gone... oops. No problem in Solaris 1…“
wikitext
text/x-wiki
== Repairing packages ==
Damn fast fingers did this:
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
bafe0255d5159fdd11a105a0bd276fc2045fdf24
253
252
2013-04-10T07:56:00Z
Lollypop
2
wikitext
text/x-wiki
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
8f5229f607c6bfff473b2782d72735ddfd198361
254
253
2013-04-10T07:57:26Z
Lollypop
2
wikitext
text/x-wiki
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
[[Kategorie:Solaris11]]
ec84a6915602a2e11d146e1189f893ad086eae12
IPS cheat sheet
0
98
256
254
2013-04-10T07:59:17Z
Lollypop
2
wikitext
text/x-wiki
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
[[Kategorie:Solaris11]]
620b1def5255e9a013953398d51538d92e2fa2d2
258
256
2013-04-10T08:24:05Z
Lollypop
2
wikitext
text/x-wiki
=Cheat sheet=
=Examples=
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
[[Kategorie:Solaris11]]
aef92fc3538959527e8ce502fe90db1e5cf29d30
270
258
2013-04-11T10:43:12Z
Lollypop
2
wikitext
text/x-wiki
=Cheat sheet=
=Examples=
== Switching to Oracle Support Repository ==
1. Download a certificate from the website https://pkg-register.oracle.com/:
(x) Oracle Solaris 11 Support -> Submit
Comment: hostname -> Accept
-> Download Key
-> Download Certificate
2. Copy both files to the machine via SCP.
3. Log in via SSH, then:
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
4. If needed, set a proxy for both http AND https:
<pre>
# http_proxy=http://139.11.6.62:3128/
# https_proxy=http://139.11.6.62:3128/
# export http_proxy https_proxy
</pre>
5. Set the new publisher:
<pre>
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
6. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
7. Check whether updates are available:
<pre>
# pkg update -nv
</pre>
8. Install the updates:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
So far, the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
[[Kategorie:Solaris11]]
25789282d7f02701c6fb3a32e31dcadcede06e5e
271
270
2013-04-11T10:51:52Z
Lollypop
2
/* Switching to Oracle Support Repository */
wikitext
text/x-wiki
=Cheat sheet=
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: Rechnername -> Accept
-> Download Key
-> Download Certificate
2. Copy it to your Solaris 11 host into /var/pkg/ssl.
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
<pre>
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
5. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
6. Check for updates:
<pre>
# pkg update -nv
</pre>
7. If needed or wanted, do the update:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
So far, the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
[[Kategorie:Solaris11]]
ba8fa113aa8accb9ba227ee58348357ff4e27b59
273
271
2013-04-11T15:49:34Z
Lollypop
2
wikitext
text/x-wiki
=Cheat sheet=
[[File:Ips-one-liners.pdf|page=1|600px]]
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: Rechnername -> Accept
-> Download Key
-> Download Certificate
2. Copy it to your Solaris 11 host into /var/pkg/ssl.
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
<pre>
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
5. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
6. Check for updates:
<pre>
# pkg update -nv
</pre>
7. If needed or wanted, do the update:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
So far, the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
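For repositories that do provide IPS format, an additional publisher can in general be added and removed like this (publisher name and URI are made up for illustration):
<pre>
# pkg set-publisher -g http://pkg.example.com/release/ example
# pkg publisher
# pkg unset-publisher example
</pre>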
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
[[Kategorie:Solaris11]]
b90610cf72fa4339a2c05a7ec11990040fca6794
Solaris 11 First Steps
0
97
257
255
2013-04-10T08:23:09Z
Lollypop
2
wikitext
text/x-wiki
= What's different to Solaris 10 =
== Installation ==
=== Automated Installer ===
== Package Management ==
[[IPS_cheat_sheet|Some examples]]
== Boot environments ==
=== beadm ===
== Live Upgrade ==
== Distro Constructor ==
== Networking ==
An enhanced version of the network stack from the Crossbow project (known from OpenSolaris) is implemented.
This brings a lot of new features, such as virtual switches, virtual NICs, and so on.
=== Interface Names ===
=== Etherstubs and VNICs ===
=== ipadm ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
[[Kategorie:Solaris11]]
3811811dc3f358f78581353968eb42cda2eb882c
259
257
2013-04-10T08:55:00Z
Lollypop
2
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments ==
=== beadm ===
== Distro Constructor ==
== Networking (Crossbow) ==
An enhanced version of the network stack from the Crossbow project (known from OpenSolaris) is implemented.
This brings a lot of new features, such as virtual switches, virtual NICs, and so on.
=== Interface Names ===
=== Etherstubs and VNICs ===
=== ipadm ===
== Storage Engine (COMSTAR) ==
== Zones ==
=== Immutable Zones ===
=== zonestat ===
[[Kategorie:Solaris11]]
3766ede0f267a96b494546e373394376a79b95ee
260
259
2013-04-10T09:00:22Z
Lollypop
2
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments (beadm) ==
== Distro Constructor ==
== Networking (Crossbow) ==
An enhanced version of the network stack from the Crossbow project (known from OpenSolaris) is implemented.
This brings a lot of new features, such as virtual switches, virtual NICs, and so on.
=== Interface Names ===
=== Etherstubs and VNICs ===
=== ipadm ===
== Storage Engine (COMSTAR) ==
== Zones ==
=== Immutable Zones ===
=== zonestat ===
[[Kategorie:Solaris11]]
c5bdb98829cf68fb48e14e2de80d553ffa7419f4
261
260
2013-04-10T09:40:11Z
Lollypop
2
/* Distro Constructor */
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments (beadm) ==
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the network stack from the Crossbow project (known from OpenSolaris) is implemented.
This brings a lot of new features, such as virtual switches, virtual NICs, and so on.
=== Interface Names ===
=== Etherstubs and VNICs ===
=== ipadm ===
== Storage Engine (COMSTAR) ==
== Zones ==
=== Immutable Zones ===
=== zonestat ===
[[Kategorie:Solaris11]]
b66d99f57d8d34e004d3fbe0be8183edbd221a42
262
261
2013-04-10T09:49:08Z
Lollypop
2
/* Interface Names */
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments (beadm) ==
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the network stack from the Crossbow project (known from OpenSolaris) is implemented.
This brings a lot of new features, such as virtual switches, virtual NICs, and so on.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... rather than after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of which hardware your server is built from.
You can even name them after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
=== Etherstubs and VNICs ===
=== ipadm ===
== Storage Engine (COMSTAR) ==
== Zones ==
=== Immutable Zones ===
=== zonestat ===
[[Kategorie:Solaris11]]
325253e9ca252686789d343cf37cb19cf4881dde
263
262
2013-04-10T09:55:09Z
Lollypop
2
/* Networking (Crossbow) */
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments (beadm) ==
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the network stack from the Crossbow project (known from OpenSolaris) is implemented.
The new stack virtualizes the networking of your Solaris instance. This means a lot of new features, such as virtual switches and virtual NICs, can be used.
You can even build complex virtualized networks inside a single Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... rather than after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of which hardware your server is built from.
You can even name them after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
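Renaming is done with dladm; a minimal sketch (assuming the link is not in use yet, names are examples):
<pre>
# dladm rename-link net0 frontend0
# dladm show-link
</pre>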
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside the OS to which VNICs and physical interfaces can be connected.
=== ipadm ===
Together with dladm, ipadm is a powerful tool for managing your network stack.
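A small sketch of how dladm and ipadm play together: create an etherstub, attach a VNIC to it, and configure an address on the VNIC (names and the address are examples):
<pre>
# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0
# ipadm create-ip vnic0
# ipadm create-addr -T static -a 192.168.10.1/24 vnic0/v4
# dladm show-vnic
# ipadm show-addr
</pre>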
== Storage Engine (COMSTAR) ==
== Zones ==
=== Immutable Zones ===
=== zonestat ===
[[Kategorie:Solaris11]]
f37a5f8556cc85cc70828ae5fc32c43023732d23
264
263
2013-04-10T11:25:18Z
Lollypop
2
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments (beadm) ==
For many years, using live upgrade was a bit difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to apply updates.
The new way to handle upgrades and updates is beadm, the boot environment administration tool. As with live upgrade, you can create a boot environment manually at any time.
What is new is that software updates via pkg create boot environments automatically when needed (or when pkg is used with --require-new-be or --require-backup-be).
== Distro Constructor ==
You can build your own Solaris 11 distribution ISO image using the Distribution Constructor. This makes customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
Solaris 11 implements an enhanced version of the network stack from the Crossbow project (known from OpenSolaris).
The new stack virtualizes the networking of your Solaris instance, which enables a lot of new features such as virtual switches and virtual NICs.
You can even build complex virtualized networks inside a single Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... instead of after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of which hardware your server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
=== ipadm ===
Together with dladm, ipadm is a powerful tool for managing your network stack.
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
[[Kategorie:Solaris11]]
66f8b5960e13e59102e90e924a2afa82b5507a72
268
264
2013-04-11T10:06:31Z
Lollypop
2
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The Automated Installer (AI for short) is a new way to set up an install server. The configuration is kept in XML files.
For further information, look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]]
== Live upgrade is now Boot environments (beadm) ==
For many years, using live upgrade was a bit difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to apply updates.
The new way to handle upgrades and updates is beadm, the boot environment administration tool. As with live upgrade, you can create a boot environment manually at any time.
What is new is that software updates via pkg create boot environments automatically when needed (or when pkg is used with --require-new-be or --require-backup-be).
== Distro Constructor ==
You can build your own Solaris 11 distribution ISO image using the Distribution Constructor. This makes customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
Solaris 11 implements an enhanced version of the network stack from the Crossbow project (known from OpenSolaris).
The new stack virtualizes the networking of your Solaris instance, which enables a lot of new features such as virtual switches and virtual NICs.
You can even build complex virtualized networks inside a single Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... instead of after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of which hardware your server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
=== ipadm ===
Together with dladm, ipadm is a powerful tool for managing your network stack.
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
[[Kategorie:Solaris11]]
711b868c4e912641ce6a945f8a8c1dca7c5ca6d6
269
268
2013-04-11T10:12:39Z
Lollypop
2
/* Package Management */
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The Automated Installer (AI for short) is a new way to set up an install server. The configuration is kept in XML files.
For further information, look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can add multiple repositories, search them for software packages, and install packages over the network.
[[IPS_cheat_sheet#Examples|Some examples]].
== Live upgrade is now Boot environments (beadm) ==
For many years, using live upgrade was a bit difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to apply updates.
The new way to handle upgrades and updates is beadm, the boot environment administration tool. As with live upgrade, you can create a boot environment manually at any time.
What is new is that software updates via pkg create boot environments automatically when needed (or when pkg is used with --require-new-be or --require-backup-be).
== Distro Constructor ==
You can build your own Solaris 11 distribution ISO image using the Distribution Constructor. This makes customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
Solaris 11 implements an enhanced version of the network stack from the Crossbow project (known from OpenSolaris).
The new stack virtualizes the networking of your Solaris instance, which enables a lot of new features such as virtual switches and virtual NICs.
You can even build complex virtualized networks inside a single Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... instead of after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of which hardware your server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
=== ipadm ===
Together with dladm, ipadm is a powerful tool for managing your network stack.
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
[[Kategorie:Solaris11]]
47d0d95f801ebb0e00505df02344594e18da2de2
272
269
2013-04-11T10:56:38Z
Lollypop
2
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The Automated Installer (AI for short) is a new way to set up an install server. The configuration is kept in XML files.
For further information, look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can add multiple repositories, search them for software packages, and install packages over the network.
[[IPS_cheat_sheet#Examples|Some examples]].
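A minimal sketch of the typical pkg workflow (the package name wget is just an illustrative example):
<pre>
# pkg publisher
# pkg search -r wget
# pkg install wget
# pkg update
</pre>
pkg publisher lists the configured repositories, pkg search -r queries them remotely, and pkg update updates the whole image (possibly creating a new boot environment).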
== Live upgrade is now Boot environments (beadm) ==
For many years, using live upgrade was a bit difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to apply updates.
The new way to handle upgrades and updates is beadm, the boot environment administration tool. As with live upgrade, you can create a boot environment manually at any time.
What is new is that software updates via pkg create boot environments automatically when needed (or when pkg is used with --require-new-be or --require-backup-be).
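A minimal sketch of manual boot environment handling (the BE name solaris-backup is just an example):
<pre>
# beadm list
# beadm create solaris-backup
# beadm activate solaris-backup
# beadm destroy solaris-backup
</pre>
beadm create clones the active BE almost instantly thanks to ZFS snapshots; beadm activate selects the BE to boot on the next reboot.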
== Distro Constructor ==
You can build your own Solaris 11 distribution ISO image using the Distribution Constructor. This makes customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
Solaris 11 implements an enhanced version of the network stack from the Crossbow project (known from OpenSolaris).
The new stack virtualizes the networking of your Solaris instance, which enables a lot of new features such as virtual switches and virtual NICs.
You can even build complex virtualized networks inside a single Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... instead of after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of which hardware your server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
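Renaming is done with dladm; a sketch, assuming net0 should carry frontend traffic (the link must not be in use while renaming):
<pre>
# dladm rename-link net0 frontend0
# dladm show-link
</pre>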
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
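A minimal sketch of a virtual switch with two VNICs attached (the names stub0, vnic0 and vnic1 are arbitrary examples):
<pre>
# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0
# dladm create-vnic -l stub0 vnic1
# dladm show-vnic
</pre>
Everything attached to stub0 can now talk to each other as if connected to a physical switch.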
=== ipadm ===
Together with dladm, ipadm is a powerful tool for managing your network stack.
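The usual starting points are the show subcommands: dladm show-phys for physical NICs, dladm show-link for all datalinks (including VNICs and etherstubs), and ipadm show-if / show-addr for the IP layer:
<pre>
# dladm show-phys
# dladm show-link
# ipadm show-if
# ipadm show-addr
</pre>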
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
[[Kategorie:Solaris11]]
17ddd2f7aaf44a33b69df6090a32a63cf60b5baf
Solaris 11 Networking
0
96
265
250
2013-04-11T09:57:54Z
Lollypop
2
wikitext
text/x-wiki
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, you have to enable the manual (fixed) configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== Change address ==
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
Log in to the new IP address.
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
3b88b266c0dc7cbf9d79a36e526ed95e497bd330
266
265
2013-04-11T10:00:40Z
Lollypop
2
/* Change adress */
wikitext
text/x-wiki
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, you have to enable the manual (fixed) configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in to the new IP address.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
6515f7a5abe46bc01d746389b4f4325f6381358e
267
266
2013-04-11T10:01:46Z
Lollypop
2
/* Change adress */
wikitext
text/x-wiki
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, you have to enable the manual (fixed) configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in to the new IP address.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
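To verify the change, the configured addresses can be listed again (a sketch):
<pre>
# ipadm show-addr
</pre>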
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
[[Kategorie:Solaris11]]
29ef07ed69ea784801efc97de071a4b7a6ba276f
File:Ips-one-liners.pdf
6
99
274
2013-04-11T16:13:43Z
Lollypop
2
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
ZFS cheatsheet
0
29
275
219
2013-04-15T11:37:30Z
Lollypop
2
wikitext
text/x-wiki
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here, after an aborted ZFS send/recv:
<pre>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</pre>
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS comes from its very large appetite for cache. This can be limited.
First, have a look at what is going on:
<pre>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</pre>
Or ZFS only:
<pre>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</pre>
Print all ARC parameters:
<pre>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</pre>
You can also print all parameters that are set for ZFS with:
<pre>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</pre>
Kernel parameters can also be set online with:
<pre>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</pre>
This sets zfs_arc_max to 4 GB = 0x100000000.
== Limiting the ARC cache ==
In /etc/system simply add:
<pre>
set zfs:zfs_arc_max = <Number of bytes>
</pre>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
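After a reboot, the effective limit can be checked via the ARC kstats (a sketch; c_max is the current maximum ARC size in bytes):
<pre>
# kstat -p zfs:0:arcstats:c_max
</pre>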
== Better display of ZFS space usage ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migrating UFS root -> ZFS root via live upgrade ==
First create the ZFS root pool:
<pre>
# zpool create rpool /dev/dsk/<zfs-disk>
</pre>
If you want to stay out of trouble, keep the name rpool.
Create a boot environment (BE) with lucreate:
<pre>
# lucreate -c ufsBE -n zfsBE -p rpool
</pre>
This copies the files into the ZFS environment.
Check that the bootfs property was set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries possibly left over in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the ZFS boot block on the ZFS disk:
<pre>
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
</pre>
Activate the new BE:
<pre>
# luactivate zfsBE
</pre>
[[Kategorie:ZFS]]
c14ebdc93bd537fe0f609f4dc75cdd1b3496d207
Solaris SMF
0
100
276
2013-05-08T12:31:47Z
Lollypop
2
Die Seite wurde neu angelegt: „== Running foreground processes == <pre> <?xml version='1.0'?> <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'> <service_bundle type…“
wikitext
text/x-wiki
== Running foreground processes ==
<pre>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='riaut' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</pre>
39d7834a167738fa4b46b709bc8b88e965b61bd8
277
276
2013-05-08T12:45:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
== Running foreground processes ==
<pre>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='riaut' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</pre>
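To use such a manifest it has to be validated and imported; a sketch, assuming it was saved as /var/svc/manifest/site/foreground-daemon.xml (the path is only an example):
<pre>
# svccfg validate /var/svc/manifest/site/foreground-daemon.xml
# svccfg import /var/svc/manifest/site/foreground-daemon.xml
# svcs -l svc:/network/foreground-daemon:default
</pre>
With duration=child, svc.startd treats the service as a child process and restarts the daemon if it exits, which is exactly what a foreground daemon needs.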
8dcc5ac3948a32b7999e70154d2d8c52b3a8a672
SSH Tipps und Tricks
0
75
278
178
2013-05-15T06:35:29Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, the way to the destination=
==SSH over one or more hops==
To make an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in to one hop and then log in onwards from there, it is often quite difficult to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the path from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for this in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
We can only reach GW_2 via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now, on Host_A, you simply type <i>ssh Host_B</i> and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now done simply like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background, and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
==Breaking out of paradise==
Problem: the environment you are in is unfortunately so boxed in with firewalls that you cannot work. But you need to get out via SSH to quickly look something up, or fetch something, elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then add to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And presto, <i>ssh ssh-via-proxy</i> puts you on the SSH destination you want to reach. Of course you can chain further ProxyCommand entries on top of it, and so on.
==Oh right... the internal wiki...==
That is no problem either: if it is only reachable from the internal network, we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Specifies a local "dynamic" application-level (SOCKS) port forwarding
</pre>
Or, again, via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config-file equivalents for -N and -f).
==The fingerprint==
For verification it is often easier to work with shorter number strings. That is why the fingerprint is handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
e1d556316a7c1616e748d0ba6307a27b0a95796f
ZFS Recovery
0
30
279
230
2013-05-15T09:28:59Z
Lollypop
2
wikitext
text/x-wiki
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<pre>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
or
# zpool import -c defect_pool.cachefile
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -T <txg> defect_pool
So, e.g., to Tue Nov 6 07:54:09 2012 -> txg 40353853
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</pre>
==PANIC, NOTICE: spa_import_rootpool: error 19==
The fix is to specify the pool and the device explicitly. So if this appears during boot:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
A boot into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
[[Kategorie:ZFS]]
350b8c36fea9f6eb991f5ec3cc18fbf9090cecc5
304
279
2013-09-10T15:48:00Z
Lollypop
2
/* Zurückgehen auf einen früheren Uberblock */
wikitext
text/x-wiki
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<source lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</source>
Under /etc/zfs:
<source lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</source>
Für einen ZPool im Solaris Cluster:
<source lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</source>
or
<source lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</source>
<source lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</source>
So, e.g., to Tue Nov 6 07:54:09 2012 -> txg 40353853
<source lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</source>
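The txg/timestamp extraction used above can be tried offline by feeding a captured zdb -lu excerpt through the same awk program (the two uberblock entries below are sample data; plain awk stands in for Solaris nawk):

```shell
# Filter a fake zdb -lu excerpt down to "txg <n>  timestamp = ..." lines
printf 'txg = 40353853\ntimestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012\ntxg = 40353879\ntimestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012\n' |
awk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}'
```

Each timestamp line is printed together with the most recently seen txg, which is what makes it easy to pick the newest transaction group with a healthy-looking time.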
==PANIC, NOTICE: spa_import_rootpool: error 19==
The fix is to specify the pool and the device explicitly. So if this appears during boot:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
A boot into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
[[Kategorie:ZFS]]
4442c6c1a4b1923157fa7106b4be441e911fc5a0
Category:Sendmail
14
101
280
2013-05-23T14:27:30Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]] =Wenn es doch mal nicht ohne Sendmail geht= ==Absender rewrite== In die .mc Datei: <pre> FEATURE(`genericstable')dnl GENERICS_DOMAIN_FILE(`…“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=When it just won't work without Sendmail=
==Sender rewrite==
Into the .mc file:
<pre>
FEATURE(`genericstable')dnl
GENERICS_DOMAIN_FILE(`/etc/mail/genericsdomain')dnl
</pre>
/etc/mail/genericsdomain:
<pre>
src-domain.de
</pre>
Check:
<pre>
# sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> $=G
src-domain.de
>
</pre>
/etc/mail/genericstable:
<pre>
# localuser in any genericsdomain -> dst-user@dst-domain.de
localuser dst-user@dst-domain.de
# any other user @src-domain.de -> default-user@dst-domain.de
@src-domain.de default-user@dst-domain.de
</pre>
Generating the translation database:
<pre>
# makemap -f hash /etc/mail/genericstable.db < /etc/mail/genericstable
</pre>
Check:
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp localuser@src-domain.de
Trying header sender address localuser@src-domain.de for mailer esmtp
canonify input: localuser @ src-domain . de
Canonify2 input: localuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: localuser < @ src-domain . de . >
canonify returns: localuser < @ src-domain . de . >
1 input: localuser < @ src-domain . de . >
1 returns: localuser < @ src-domain . de . >
HdrFromSMTP input: localuser < @ src-domain . de . >
PseudoToReal input: localuser < @ src-domain . de . >
PseudoToReal returns: localuser < @ src-domain . de . >
MasqSMTP input: localuser < @ src-domain . de . >
MasqSMTP returns: localuser < @ src-domain . de . >
MasqHdr input: localuser < @ src-domain . de . >
map_lookup(generics, localuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => NOT FOUND (0)
map_lookup(generics, localuser) => dst-user@dst-domain.de (0)
canonify input: dst-user @ dst-domain . de
Canonify2 input: dst-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: dst-user < @ dst-domain . de >
canonify returns: dst-user < @ dst-domain . de >
MasqHdr returns: dst-user < @ dst-domain . de >
HdrFromSMTP returns: dst-user < @ dst-domain . de >
final input: dst-user < @ dst-domain . de >
final returns: dst-user @ dst-domain . de
Rcode = 0, addr = dst-user@dst-domain.de
</pre>
And for arbitrary user@src-domain.de:
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp anyuser@src-domain.de
Trying header sender address anyuser@src-domain.de for mailer esmtp
canonify input: anyuser @ src-domain . de
Canonify2 input: anyuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: anyuser < @ src-domain . de . >
canonify returns: anyuser < @ src-domain . de . >
1 input: anyuser < @ src-domain . de . >
1 returns: anyuser < @ src-domain . de . >
HdrFromSMTP input: anyuser < @ src-domain . de . >
PseudoToReal input: anyuser < @ src-domain . de . >
PseudoToReal returns: anyuser < @ src-domain . de . >
MasqSMTP input: anyuser < @ src-domain . de . >
MasqSMTP returns: anyuser < @ src-domain . de . >
MasqHdr input: anyuser < @ src-domain . de . >
map_lookup(generics, anyuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => default-user@dst-domain.de (0)
canonify input: default-user @ dst-domain . de
Canonify2 input: default-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: default-user < @ dst-domain . de >
canonify returns: default-user < @ dst-domain . de >
MasqHdr returns: default-user < @ dst-domain . de >
HdrFromSMTP returns: default-user < @ dst-domain . de >
final input: default-user < @ dst-domain . de >
final returns: default-user @ dst-domain . de
Rcode = 0, addr = default-user@dst-domain.de
</pre>
d0e1658afa588990367fd6d8215cc911d5d36e9b
281
280
2013-05-23T14:29:28Z
Lollypop
2
Der Seiteninhalt wurde durch einen anderen Text ersetzt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
283
281
2013-05-23T14:31:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=When it just won't work without Sendmail=
4f09208a3fbb7024c19947cbf76c06ab5346eb53
Sendmail sender rewrite
0
102
282
2013-05-23T14:31:01Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Sendmail]] ==Absender rewrite== In die .mc Datei: <pre> FEATURE(`genericstable')dnl GENERICS_DOMAIN_FILE(`/etc/mail/genericsdomain')dnl </pre> /etc/m…“
wikitext
text/x-wiki
[[Kategorie:Sendmail]]
==Sender rewrite==
Into the .mc file:
<pre>
FEATURE(`genericstable')dnl
GENERICS_DOMAIN_FILE(`/etc/mail/genericsdomain')dnl
</pre>
/etc/mail/genericsdomain:
<pre>
src-domain.de
</pre>
Check:
<pre>
# sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> $=G
src-domain.de
>
</pre>
/etc/mail/genericstable:
<pre>
# localuser in any genericsdomain -> dst-user@dst-domain.de
localuser dst-user@dst-domain.de
# any other user @src-domain.de -> default-user@dst-domain.de
@src-domain.de default-user@dst-domain.de
</pre>
Generating the translation database:
<pre>
# makemap -f hash /etc/mail/genericstable.db < /etc/mail/genericstable
</pre>
Check:
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp localuser@src-domain.de
Trying header sender address localuser@src-domain.de for mailer esmtp
canonify input: localuser @ src-domain . de
Canonify2 input: localuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: localuser < @ src-domain . de . >
canonify returns: localuser < @ src-domain . de . >
1 input: localuser < @ src-domain . de . >
1 returns: localuser < @ src-domain . de . >
HdrFromSMTP input: localuser < @ src-domain . de . >
PseudoToReal input: localuser < @ src-domain . de . >
PseudoToReal returns: localuser < @ src-domain . de . >
MasqSMTP input: localuser < @ src-domain . de . >
MasqSMTP returns: localuser < @ src-domain . de . >
MasqHdr input: localuser < @ src-domain . de . >
map_lookup(generics, localuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => NOT FOUND (0)
map_lookup(generics, localuser) => dst-user@dst-domain.de (0)
canonify input: dst-user @ dst-domain . de
Canonify2 input: dst-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: dst-user < @ dst-domain . de >
canonify returns: dst-user < @ dst-domain . de >
MasqHdr returns: dst-user < @ dst-domain . de >
HdrFromSMTP returns: dst-user < @ dst-domain . de >
final input: dst-user < @ dst-domain . de >
final returns: dst-user @ dst-domain . de
Rcode = 0, addr = dst-user@dst-domain.de
</pre>
And for arbitrary user@src-domain.de:
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp anyuser@src-domain.de
Trying header sender address anyuser@src-domain.de for mailer esmtp
canonify input: anyuser @ src-domain . de
Canonify2 input: anyuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: anyuser < @ src-domain . de . >
canonify returns: anyuser < @ src-domain . de . >
1 input: anyuser < @ src-domain . de . >
1 returns: anyuser < @ src-domain . de . >
HdrFromSMTP input: anyuser < @ src-domain . de . >
PseudoToReal input: anyuser < @ src-domain . de . >
PseudoToReal returns: anyuser < @ src-domain . de . >
MasqSMTP input: anyuser < @ src-domain . de . >
MasqSMTP returns: anyuser < @ src-domain . de . >
MasqHdr input: anyuser < @ src-domain . de . >
map_lookup(generics, anyuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => default-user@dst-domain.de (0)
canonify input: default-user @ dst-domain . de
Canonify2 input: default-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: default-user < @ dst-domain . de >
canonify returns: default-user < @ dst-domain . de >
MasqHdr returns: default-user < @ dst-domain . de >
HdrFromSMTP returns: default-user < @ dst-domain . de >
final input: default-user < @ dst-domain . de >
final returns: default-user @ dst-domain . de
Rcode = 0, addr = default-user@dst-domain.de
</pre>
bd3e3fe540bf806533f8385836201542a48825a1
284
282
2013-05-23T14:33:38Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Sendmail]]
=Sender rewrite=
Into the .mc file:
<pre>
FEATURE(`genericstable')dnl
GENERICS_DOMAIN_FILE(`/etc/mail/genericsdomain')dnl
</pre>
==/etc/mail/genericsdomain==
<pre>
src-domain.de
</pre>
==Testing the genericsdomain==
<pre>
# sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> $=G
src-domain.de
>
</pre>
==/etc/mail/genericstable==
<pre>
# localuser in any genericsdomain -> dst-user@dst-domain.de
localuser dst-user@dst-domain.de
# any other user @src-domain.de -> default-user@dst-domain.de
@src-domain.de default-user@dst-domain.de
</pre>
==Generating the translation database genericstable.db==
<pre>
# makemap -f hash /etc/mail/genericstable.db < /etc/mail/genericstable
</pre>
==Testing the genericstable.db==
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp localuser@src-domain.de
Trying header sender address localuser@src-domain.de for mailer esmtp
canonify input: localuser @ src-domain . de
Canonify2 input: localuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: localuser < @ src-domain . de . >
canonify returns: localuser < @ src-domain . de . >
1 input: localuser < @ src-domain . de . >
1 returns: localuser < @ src-domain . de . >
HdrFromSMTP input: localuser < @ src-domain . de . >
PseudoToReal input: localuser < @ src-domain . de . >
PseudoToReal returns: localuser < @ src-domain . de . >
MasqSMTP input: localuser < @ src-domain . de . >
MasqSMTP returns: localuser < @ src-domain . de . >
MasqHdr input: localuser < @ src-domain . de . >
map_lookup(generics, localuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => NOT FOUND (0)
map_lookup(generics, localuser) => dst-user@dst-domain.de (0)
canonify input: dst-user @ dst-domain . de
Canonify2 input: dst-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: dst-user < @ dst-domain . de >
canonify returns: dst-user < @ dst-domain . de >
MasqHdr returns: dst-user < @ dst-domain . de >
HdrFromSMTP returns: dst-user < @ dst-domain . de >
final input: dst-user < @ dst-domain . de >
final returns: dst-user @ dst-domain . de
Rcode = 0, addr = dst-user@dst-domain.de
</pre>
And for arbitrary user@src-domain.de:
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp anyuser@src-domain.de
Trying header sender address anyuser@src-domain.de for mailer esmtp
canonify input: anyuser @ src-domain . de
Canonify2 input: anyuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: anyuser < @ src-domain . de . >
canonify returns: anyuser < @ src-domain . de . >
1 input: anyuser < @ src-domain . de . >
1 returns: anyuser < @ src-domain . de . >
HdrFromSMTP input: anyuser < @ src-domain . de . >
PseudoToReal input: anyuser < @ src-domain . de . >
PseudoToReal returns: anyuser < @ src-domain . de . >
MasqSMTP input: anyuser < @ src-domain . de . >
MasqSMTP returns: anyuser < @ src-domain . de . >
MasqHdr input: anyuser < @ src-domain . de . >
map_lookup(generics, anyuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => default-user@dst-domain.de (0)
canonify input: default-user @ dst-domain . de
Canonify2 input: default-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: default-user < @ dst-domain . de >
canonify returns: default-user < @ dst-domain . de >
MasqHdr returns: default-user < @ dst-domain . de >
HdrFromSMTP returns: default-user < @ dst-domain . de >
final input: default-user < @ dst-domain . de >
final returns: default-user @ dst-domain . de
Rcode = 0, addr = default-user@dst-domain.de
</pre>
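The net effect of the genericstable (full address wins, otherwise the bare local part, otherwise the @domain fallback) can be illustrated with a small shell sketch over the plain-text table. This only reproduces the resulting mapping, not sendmail's actual probe order or its hashed .db lookups; the lookup function and the file path are made up for the demo:

```shell
# Illustrative genericstable lookup (NOT sendmail itself): first match wins
cat > /tmp/genericstable <<'EOF'
localuser dst-user@dst-domain.de
@src-domain.de default-user@dst-domain.de
EOF

lookup() {
    addr=$1; user=${1%@*}; dom=@${1#*@}
    for key in "$addr" "$user" "$dom"; do
        hit=$(awk -v k="$key" '$1==k {print $2; exit}' /tmp/genericstable)
        if [ -n "$hit" ]; then echo "$hit"; return; fi
    done
    echo "$addr"   # no mapping found: address stays unchanged
}

lookup localuser@src-domain.de   # -> dst-user@dst-domain.de
lookup anyuser@src-domain.de     # -> default-user@dst-domain.de
```

Addresses in domains not listed in genericsdomain simply pass through untouched.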
85868f0ab13bf509fd0f9f9755254dfd0a865f49
Category:OpenVPN
14
103
285
2013-05-30T09:03:22Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
OpenVPN Inline Certs
0
104
286
2013-05-30T09:08:20Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:OpenVPN]] To get an OpenVPN-Configuration in one file you can inline all referred files like this: <pre> # nawk ' /^(tls-auth|ca|cert|key)/ { type=$…“
wikitext
text/x-wiki
[[Kategorie:OpenVPN]]
To get an OpenVPN-Configuration in one file you can inline all referred files like this:
<pre>
# nawk '
/^(tls-auth|ca|cert|key)/ {
type=$1;
file=$2;
# for tls-auth we need the key-direction
if(type=="tls-auth")print "key-direction",$3;
print "<"type">";
while(getline tlsauth<file)
print tlsauth;
close(file);
print "</"type">";
next;
}
{
# All other lines are printed as they are
print;
}' connection.ovpn
</pre>
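A quick way to try this is on a throwaway mini-config (the file names and contents below are made up; plain awk stands in for nawk, and the getline return value is checked so a missing file cannot loop forever):

```shell
# Build a tiny fake client config plus CA file, then inline it
mkdir -p /tmp/ovpn-demo && cd /tmp/ovpn-demo
printf 'FAKE-CA\n' > ca.crt
printf 'client\nca ca.crt\n' > connection.ovpn
awk '
/^(tls-auth|ca|cert|key)/ {
    type=$1; file=$2;
    if (type=="tls-auth") print "key-direction", $3;  # tls-auth carries a direction
    print "<" type ">";
    while ((getline line < file) > 0) print line;     # splice the referenced file in
    close(file);
    print "</" type ">";
    next;
}
{ print }' connection.ovpn > inline.ovpn
cat inline.ovpn   # -> client, then <ca>FAKE-CA</ca>
```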
18c6480b442883959e432236c9c3baf5624cb991
Category:FC
14
105
287
2013-07-30T07:09:05Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: KnowHow]]“
wikitext
text/x-wiki
[[Kategorie: KnowHow]]
66e66ecf096ffc26f093363afb83fca47a7b1982
File:Switch-types-blads-ids-product-names.pdf
6
106
288
2013-07-30T08:01:16Z
Lollypop
2
Brocade swType to Name mapping
wikitext
text/x-wiki
Brocade swType to Name mapping
f0615dbb0141c1cf5364eb97647009b80613050f
Brocade
0
107
289
2013-07-30T08:11:07Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:FC]] ==Firmware== <source lang=bash> brocade:admin> firmwareshow Appl Primary/Secondary Versions ------------------------------------------ FOS …“
wikitext
text/x-wiki
[[Kategorie:FC]]
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchType===
<source lang=bash>
switchType: 71.2
</source>
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Tabelle von IBM]
* PDF von Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether zoning is active and which zone configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
==Fabric==
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.109 0.0.0.0 >"Fabric1"
</source>
==InterSwitchLinks (ISL)==
<source lang=bash>
brocade:admin> islshow
</source>
912a512106338c1605222f419b4bcdad480bd4d4
290
289
2013-07-30T08:47:10Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:FC]]
=A few commands and an explanation=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells you which switch model you are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Tabelle von IBM]
* PDF von Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is active and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* principal
and
* subordinate
<source lang=bash>
switchRole: Principal
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage, and tapes are attached to the fabric via the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.109 0.0.0.0 >"Fabric1"
</source>
==InterSwitchLinks (ISL)==
islshow tells you which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
These days one practically only does WWN zoning, because it is the most flexible and safest option. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is a risk of plugging a cable into the wrong port.
cbe983c2841d779aff8554304f208b41d61b9eb9
291
290
2013-07-30T09:12:12Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:FC]]
=A few commands with a short explanation=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells you which switch model you are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Tabelle von IBM]
* PDF von Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is active and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
<source lang=bash>
switchRole: Principal
</source>
There are two roles:
* Principal (i.e. the boss)
and
* Subordinate (i.e. the underling)
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage, and tapes are attached to the fabric via the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow tells you which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
These days one practically only does WWN zoning, because it is the most flexible and safest option. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is a risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
576b3ba0ef5183dc8dc0c5551b6c4e373cda037a
292
291
2013-07-30T09:18:02Z
Lollypop
2
/* switchshow:switchRole */
wikitext
text/x-wiki
[[Kategorie:FC]]
=A Few Commands, Briefly Explained=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType identifies which switch model we are looking at; here it is a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
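The lookup from switchType to product name can also be scripted. The helper below is a hypothetical sketch (not part of FOS) that strips the label and the revision suffix (the ".2" in "71.2") and covers only a few entries from the table further down:

```shell
# Hypothetical helper: map the numeric part of a "switchType:" line to a
# product name. Only a handful of table entries are included here.
switch_model() {
  # "switchType:  71.2" -> "71" (split on ":", "." and spaces)
  t=$(printf '%s\n' "$1" | awk -F'[:. ]+' '{print $2}')
  case "$t" in
    62)  echo "Brocade DCX Backbone" ;;
    66)  echo "Brocade 5100 Switch" ;;
    71)  echo "Brocade 300 Switch" ;;
    109) echo "Brocade 6510 Switch" ;;
    *)   echo "unknown switchType $t" ;;
  esac
}

switch_model "switchType:  71.2"
```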
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is enabled and which configuration is active (here: Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
* Subordinate (the underling)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed:
'''WARNING: This action is disruptive!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches that are connected to each other. Components such as hosts, storage, and tape libraries are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
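The > in front of a name in the fabricshow output marks the principal switch of the fabric. A small sketch (sample lines taken from the output above; field positions are an assumption) extracts it:

```shell
# Find the principal switch: its name in fabricshow is prefixed with ">".
fabricshow_output='  1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
  2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"'

principal=$(printf '%s\n' "$fabricshow_output" \
  | awk '$6 ~ /^>/ { gsub(/[>"]/, "", $6); print $6 }')
echo "principal switch: $principal"
```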
==InterSwitchLinks (ISL)==
islshow shows which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
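Since several ISLs can lead to the same neighbour, the usable trunk bandwidth per neighbour is the sum of the individual links. A sketch with awk (sample lines from the output above; field positions are an assumption):

```shell
# Sum ISL bandwidth per neighbour switch from islshow output.
# $6 is the neighbour name, $10 the "bw:" value (e.g. "4.000G").
islshow_output='1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G'

printf '%s\n' "$islshow_output" | awk '
  { sub(/G$/, "", $10); bw[$6] += $10 }
  END { for (s in bw) printf "%s %.3fG\n", s, bw[s] }' | sort
```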
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Nowadays WWN zoning is practically the only kind still used, because it is the most flexible and secure: cables can be moved around within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
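A typical WWN-zoning workflow looks roughly like this sketch (zone name, config name, and WWNs are made up for illustration; check with zoneshow before enabling, since cfgenable replaces the active configuration):
<source lang=bash>
brocade:admin> zonecreate "host1_stor1", "21:00:00:24:ff:36:45:02; 50:0a:09:81:96:c8:3e:f8"
brocade:admin> cfgcreate "Fabric1", "host1_stor1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
</source>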
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
VMWare Linux parameter
[[Kategorie:VMWare]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
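A small sketch that extracts the effective key/value pairs from a sysctl.conf-style fragment (comments and blank lines skipped), e.g. to compare them against the running kernel's values:

```shell
# Parse sysctl.conf-style lines into key=value pairs, ignoring comments.
conf='# vm.swappiness = 0: the kernel will swap only to avoid OOM.
vm.swappiness = 0

net.ipv4.tcp_syncookies = 1'

printf '%s\n' "$conf" | awk -F'[[:space:]]*=[[:space:]]*' '
  /^[[:space:]]*(#|$)/ { next }   # skip comments and blank lines
  { printf "%s=%s\n", $1, $2 }'
```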
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware with this content:
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
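The hook receives the kernel version being installed as its first argument. A self-contained dry-run of that calling convention, with vmware-config-tools.pl replaced by echo (the real tool only exists on a VMware guest) and a demo path made up for the example:

```shell
# Stand-in hook that echoes instead of building modules, to show how the
# kernel postinst machinery passes the kernel version as $1.
cat > /tmp/vmware-hook-demo <<'EOF'
#!/bin/sh
inst_kern=$1
echo "would build modules for kernel ${inst_kern}"
EOF
chmod 755 /tmp/vmware-hook-demo

# The kernel package would invoke the hook like this:
/tmp/vmware-hook-demo 3.2.0-52-generic
```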
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with Module-Assistant:
<source lang=bash>
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
Category:VMWare
[[Kategorie:KnowHow]]
NetApp SSH
[[Kategorie:NetApp]]
== Check whether the SSH home directory /etc/sshd/<user>/.ssh exists ==
<source lang=bash>
nac*> priv set -q diag
nac*> ls /etc/sshd/
.
..
ssh_host_key
ssh_host_key.pub
ssh_host_rsa_key
ssh_host_rsa_key.pub
ssh_host_dsa_key
ssh_host_dsa_key.pub
</source>
== Create a directory with mode 0700 ==
<source lang=bash>
nac*> options wafl.default_qtree_mode
wafl.default_qtree_mode 0777
nac*> options wafl.default_qtree_mode 0700
nac*> qtree create /vol/vol0/__
nac*> options wafl.default_qtree_mode 0777
</source>
== Check / enable the NDMPd status ==
<source lang=bash>
nac*> ndmpd status
ndmpd OFF.
No ndmpd sessions active.
nac*> ndmpd on
nac*> ndmpd status
ndmpd ON.
No ndmpd sessions active.
</source>
== Create the directory by copying the qtree ==
<source lang=bash>
nac*> ndmpcopy /vol/vol0/__ /vol/vol0/etc/sshd/root/.ssh
...
Ndmpcopy: Transfer successful [ 0 hours, 0 minutes, 20 seconds ]
Ndmpcopy: Done
nac*> qtree delete /vol/vol0/__
</source>
== Write the SSH key to /etc/sshd/<user>/.ssh/authorized_keys ==
<source lang=bash>
nac*> wrfile /etc/sshd/root/.ssh/authorized_keys
ssh-dss AAA...== user@clienthost
^C
</source>
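Since wrfile offers no editing, it helps to sanity-check the key line on the client before pasting it. A hypothetical pre-flight check (the accepted key-type list is an assumption, and the key material below is a made-up placeholder):

```shell
# Check that an authorized_keys line has the "<type> <base64> <comment>"
# shape before pasting it into wrfile on the filer.
line='ssh-dss AAAAB3NzaC1kc3MAAACBAO== user@clienthost'
set -- $line
case "$1" in
  ssh-dss|ssh-rsa|ssh-ed25519)
    echo "ok: $1 key for $3" ;;
  *)
    echo "unexpected key type: $1" ;;
esac
```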
Solaris ssh from DVD
[[Kategorie:Solaris]]
=Get SSH on a system booted from DVD=
==Mount DVD==
<source lang=bash>
# iostat -En
c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: AMI Product: Virtual CDROM Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 732 Predictive Failure Analysis: 0
...
# mkdir /tmp/dvd
# mount -F hsfs -oro /dev/dsk/c0t0d0s0 /tmp/dvd
</source>
==Unpacking software==
<source lang=bash>
# mkdir /tmp/pkg
# pkgtrans /tmp/dvd/Solaris_10/Product /tmp/pkg SUNWsshu SUNWcry SUNWopenssl-libraries
# mkdir /tmp/ssh
# cd /tmp/ssh
# 7z x -so /tmp/pkg/SUNWsshu/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWcry/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWopenssl-libraries/archive/none.7z | cpio -idv
</source>
==Prefer unpacked libraries==
<source lang=bash>
# crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
# crle
Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
Command line:
crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
</source>
==Check it==
<source lang=bash>
# ldd /tmp/ssh/usr/bin/ssh
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libz.so.1 => /usr/lib/libz.so.1
libcrypto.so.0.9.7 => /usr/sfw/lib/libcrypto.so.0.9.7
libgss.so.1 => /usr/lib/libgss.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libcmd.so.1 => /lib/libcmd.so.1
libdoor.so.1 => /lib/libdoor.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
libm.so.2 => /lib/libm.so.2
</source>
Looks good:
* libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
==Use ssh from /tmp/ssh==
<source lang=bash>
# /tmp/ssh/usr/bin/ssh <user>@<ip>
</source>
6532b6d3740184db4941dd6665ea308d15f8ab3b
312
311
2014-01-02T16:52:21Z
Lollypop
2
/* Prefer unpacked libraries */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Get SSH on a system booted from DVD=
==Mount DVD==
<source lang=bash>
# iostat -En
c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: AMI Product: Virtual CDROM Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 732 Predictive Failure Analysis: 0
...
# mkdir /tmp/dvd
# mount -F hsfs -oro /dev/dsk/c0t0d0s0 /tmp/dvd
</source>
==Unpacking software==
<source lang=bash>
# mkdir /tmp/pkg
# pkgtrans /tmp/dvd/Solaris_10/Product /tmp/pkg SUNWsshu SUNWcry SUNWopenssl-libraries
# mkdir /tmp/ssh
# cd /tmp/ssh
# 7z x -so /tmp/pkg/SUNWsshu/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWcry/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWopenssl-libraries/archive/none.7z | cpio -idv
</source>
==Use unpacked libraries==
<source lang=bash>
# crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
# crle
Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
Command line:
crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
</source>
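If you would rather not touch the system-wide runtime linker configuration, a per-process alternative is LD_LIBRARY_PATH. This is only a sketch, assuming the libraries were unpacked to /tmp/ssh/usr/sfw/lib as above; note that secure (setuid) binaries ignore this variable.
<source lang=bash>
# Per-process alternative to crle: only children of this shell will
# prefer the unpacked libraries; nothing system-wide is changed.
LD_LIBRARY_PATH=/tmp/ssh/usr/sfw/lib
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
</source>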
==Check it==
<source lang=bash>
# ldd /tmp/ssh/usr/bin/ssh
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libz.so.1 => /usr/lib/libz.so.1
libcrypto.so.0.9.7 => /usr/sfw/lib/libcrypto.so.0.9.7
libgss.so.1 => /usr/lib/libgss.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libcmd.so.1 => /lib/libcmd.so.1
libdoor.so.1 => /lib/libdoor.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
libm.so.2 => /lib/libm.so.2
</source>
Looks good:
* libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
==Use ssh from /tmp/ssh==
<source lang=bash>
# /tmp/ssh/usr/bin/ssh <user>@<ip>
</source>
288e6c3ccf4f2b9fc7cafe05a879c99618ec31db
Category:MeerwasserAquarium
14
112
313
2014-01-26T11:12:05Z
Lollypop
2
Created the page: „[[Kategorie:Projekte]]“
wikitext
text/x-wiki
[[Kategorie:Projekte]]
e5d54bb57ba0950c24694ecf450e32e79e629d83
Xenia umbellata
0
113
314
2014-01-26T11:13:21Z
Lollypop
2
Created the page: „[[Kategorie:MeerwasserAquarium]]“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
8be7020c03c3c9fea5db630fced24aa7302520d8
315
314
2014-01-26T11:17:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Ameisenart
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
4c7fe5ce180d2229244a331633f9e471fb55fb1f
319
315
2014-01-26T11:48:03Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Ameisenart
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
When the pulsing Xenia (Xenia umbellata) is first placed in the tank, it contracts, as all soft corals initially do, and for the first two days it looks more dead than alive. We simply tied it on with a rubber band.
[[Datei:Xenia_umbellata-New.png|200px|thumb|left|Xenia umbellata, freshly placed]]
Soon, however, it unfolds its full splendor again.
[[Datei:Xenia_umbellata-First_week.png|200px|thumb|left|Xenia umbellata, in the first week]]
After two weeks at the latest it has attached itself and the rubber band can be removed.
[[Datei:Xenia_umbellata-Two_weeks.png|200px|thumb|left|Xenia umbellata, after two weeks]]
f39d861d647755f49dbd710175c2a1ead18b3079
320
319
2014-01-26T12:08:43Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Ameisenart
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
When the pulsing Xenia (Xenia umbellata) is first placed in the tank, it contracts, as all soft corals initially do, and for the first two days it looks more dead than alive. We simply tied it on with a rubber band. After a few days it unfolds its full splendor again, and after two weeks at the latest it has attached itself and the rubber band can be removed.
<gallery mode="packed-hover">
Image:Xenia_umbellata-New.png|Freshly placed
Image:Xenia_umbellata-First_week.png|In the first week
Image:Xenia_umbellata-Two_weeks.png|After two weeks
</gallery>
fc3d9ae57fc6f77d1f17950fef54d79c6f207e9b
321
320
2014-01-26T12:11:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Ameisenart
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
When the pulsing Xenia (Xenia umbellata) is first placed in the tank, it contracts, as all soft corals initially do, and for the first two days it looks more dead than alive. We simply tied it on with a rubber band. After a few days it unfolds its full splendor again, and after two weeks at the latest it has attached itself and the rubber band can be removed.
Since the pulsing Xenia propagates quite readily, it is a good idea to attach a new arrival to a separate rock right away, so that it is easier to keep under control and can still be relocated if necessary.
<gallery mode="packed-hover">
Image:Xenia_umbellata-New.png|Freshly placed
Image:Xenia_umbellata-First_week.png|In the first week
Image:Xenia_umbellata-Two_weeks.png|After two weeks
</gallery>
62910cafca5671d2850be7520c6ecf335460ca71
324
321
2014-01-26T12:19:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
When the pulsing Xenia (Xenia umbellata) is first placed in the tank, it contracts, as all soft corals initially do, and for the first two days it looks more dead than alive. We simply tied it on with a rubber band. After a few days it unfolds its full splendor again, and after two weeks at the latest it has attached itself and the rubber band can be removed.
Since the pulsing Xenia propagates quite readily, it is a good idea to attach a new arrival to a separate rock right away, so that it is easier to keep under control and can still be relocated if necessary.
<gallery mode="packed-hover">
Image:Xenia_umbellata-New.png|Freshly placed
Image:Xenia_umbellata-First_week.png|In the first week
Image:Xenia_umbellata-Two_weeks.png|After two weeks
</gallery>
9f26e2e704b4cbc060df24ee3522d22d0550b9d7
355
324
2014-02-04T12:34:39Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Xenia_umbellata-Two_weeks.png
| Bildbeschreibung = Xenia umbellata
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
When the pulsing Xenia (Xenia umbellata) is first placed in the tank, it contracts, as all soft corals initially do, and for the first two days it looks more dead than alive. We simply tied it on with a rubber band. After a few days it unfolds its full splendor again, and after two weeks at the latest it has attached itself and the rubber band can be removed.
Since the pulsing Xenia propagates quite readily, it is a good idea to attach a new arrival to a separate rock right away, so that it is easier to keep under control and can still be relocated if necessary.
<gallery mode="packed-hover">
Image:Xenia_umbellata-New.png|Freshly placed
Image:Xenia_umbellata-First_week.png|In the first week
Image:Xenia_umbellata-Two_weeks.png|After two weeks
</gallery>
8c095ee0f19b346c461db25c0f46789095eb786f
File:Xenia umbellata-New.png
6
114
316
2014-01-26T11:28:56Z
Lollypop
2
Xenia umbellata, freshly placed.
wikitext
text/x-wiki
Xenia umbellata, freshly placed.
cbcff53f26f911f79f72253b19497c04b6e5536d
File:Xenia umbellata-First week.png
6
115
317
2014-01-26T11:36:38Z
Lollypop
2
Xenia umbellata in the first week.
wikitext
text/x-wiki
Xenia umbellata in the first week.
f0afbf21cf687c5dba97751b067af5178a70d24e
File:Xenia umbellata-Two weeks.png
6
116
318
2014-01-26T11:43:26Z
Lollypop
2
Xenia umbellata after two weeks.
wikitext
text/x-wiki
Xenia umbellata after two weeks.
b2710bb8f72fe31c12279e072725c1e1aeeee773
Template:Systematik
10
117
322
2014-01-26T12:17:25Z
Lollypop
2
Created the page: „<includeonly> {| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left…“
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
</includeonly>
<noinclude></noinclude>
2ac4c2703491d2c3ee3ec51929af12b82a84659a
323
322
2014-01-26T12:19:00Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
</includeonly>
<noinclude></noinclude>
5b33e23d4b077de6c290c0fe40d19bb879ca9ccd
Solaris zone memory on the fly
0
118
325
2014-01-27T10:46:54Z
Lollypop
2
Created the page: „[[Kategorie:Solaris]] = Setting memory parameter for running zones = You can change memory parameter for running zones. But remember to make it persistent by chan…“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
= Setting memory parameters for running zones =
You can change memory parameters for a running zone, but remember to make the change persistent by updating the zone's configuration file, too.
That is why I always update the configuration file first.
== Change setting in the config file ==
<source lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</source>
== Change settings for the running zone ==
First take a look:
<source lang=bash>
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Set the new values:
<source lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
</source>
Verify the values:
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Done.
2d5ecce3c8fc43dfa2769f1ef49a5bf9bb610069
326
325
2014-01-27T10:48:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
= Setting memory parameters for running zones =
You can change memory parameters for a running zone, but remember to make the change persistent by updating the zone's configuration file, too.
That is why I always update the configuration file first.
== Change setting in the config file ==
<source lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</source>
== Change settings for the running zone ==
First take a look:
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 65536 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Set the new values:
<source lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
</source>
Verify the values:
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Done.
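The two runtime commands can be collected in a small helper for repeat use. This is only a sketch: it prints the rcapadm/prctl invocations from the steps above (zone name and size as parameters) so they can be reviewed before being piped to a root shell.
<source lang=bash>
#!/bin/sh
# Sketch: emit the commands that re-cap memory and swap for a running zone.
# Review the output, then pipe it to `sh` as root to apply.
set_zone_memory() {
    zone=$1
    size=$2
    echo "rcapadm -z $zone -m $size"
    echo "prctl -n zone.max-swap -v $size -t privileged -r -e deny -i zone $zone"
}

set_zone_memory myzone 16G
</source>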
073c0ebbd033e04f53a1e6257f61690b0427171f
VMWare Linux parameter
0
108
327
305
2014-01-29T09:23:39Z
Lollypop
2
/* Source from VMWare */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
gpg --search C0B5E0AB66FD4949 # add the key
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module assistant
<source lang=bash>
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
a09b9a026293f577060bc20287a7cf55210f49cc
328
327
2014-01-29T10:07:47Z
Lollypop
2
/* Source from VMWare */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
gpg --search C0B5E0AB66FD4949 # add the key
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module assistant
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
7fe441324229c089ebdb0566dc4c7f64c8fb5d28
330
328
2014-02-04T09:50:59Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
gpg --search C0B5E0AB66FD4949 # add the key
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module assistant
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source>
libdir = "/usr/lib/vmware-tools"
</source>
b07f7936c381210cba327513f427330a9d353ee4
331
330
2014-02-04T09:56:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
gpg --search C0B5E0AB66FD4949 # add the key
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module assistant
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
f2af13d41e73c40a286c2834d832191a8170af8e
334
331
2014-02-04T10:04:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]][[Kategorie:Ubuntu]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
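Settings in /etc/sysctl.conf only take effect at boot; running `sysctl -p` as root applies them immediately. As a rough sketch of how `-p` reads the file, each entry boils down to a key=value pair with comments and blank lines stripped (the helper name `normalize` and the temporary file are invented for illustration; run the real file through `sysctl -p` instead).
<source lang=bash>
# Sketch: reduce a sysctl.conf fragment to the key=value pairs that
# `sysctl -p` would apply.
normalize() {
    sed -e 's/#.*//' -e '/^[[:space:]]*$/d' \
        -e 's/[[:space:]]*=[[:space:]]*/=/' "$1"
}

conf=$(mktemp)
cat > "$conf" <<'EOF'
# TCP SYN Flood Protection
vm.swappiness = 0
net.ipv4.tcp_syncookies = 1
EOF
normalize "$conf"
rm -f "$conf"
</source>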
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
gpg --search C0B5E0AB66FD4949 # import the key (choose 1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module-assistant:
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
715a8d969c057c27f5a5e9b0346cae03e782cede
335
334
2014-02-04T10:07:03Z
Lollypop
2
/* Source from VMWare */
wikitext
text/x-wiki
[[Kategorie:VMWare]][[Kategorie:Ubuntu]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
gpg --search C0B5E0AB66FD4949 # import the key (choose 1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module-assistant:
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
d817186a1cf1b14ab0adbdcd90945a08b16309f2
336
335
2014-02-04T10:18:52Z
Lollypop
2
/* Source from VMWare */
wikitext
text/x-wiki
[[Kategorie:VMWare]][[Kategorie:Ubuntu]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</source>
Then do:
<source lang=bash>
gpg --search C0B5E0AB66FD4949 # import the key (choose 1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module-assistant:
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
2a62aef019f1c9f4f97551913f49278c5c3b50ef
Duncanopsammia axifuga
0
119
329
2014-02-03T16:05:32Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Bartkoralle | WissName = Duncanopsammia axifuga | Autor = | Untergattung =…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Bartkoralle
| WissName = Duncanopsammia axifuga
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien, Indischer Ozean, Südchinesisches Meer, Taiwan
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Artemia, Phytoplankton, Plankton, Staubfutter, Zooplankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
069b7155b38c686f469f22959f793d187387a473
339
329
2014-02-04T10:34:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Bartkoralle
| WissName = Duncanopsammia axifuga
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien, Indischer Ozean, Südchinesisches Meer, Taiwan
| Nahrung = Artemia, Phytoplankton, Plankton, Staubfutter, Zooplankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
5e6f0a64d33ff749f4307c1aeb92147f1e883328
Ubuntu apt
0
120
332
2014-02-04T10:02:25Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Ubuntu]] == Configuring a proxy for apt == Put this into your /etc/apt/apt.conf.d/00proxy : <source lang=bash> // Options for the downloading routines…“
wikitext
text/x-wiki
[[Kategorie:Ubuntu]]
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Passive mode
   is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
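Whether the fragment actually sets a proxy can be checked by grepping the value back out. A minimal sketch; the heredoc stands in for /etc/apt/apt.conf.d/00proxy, and the concrete user/host/port values are made up in place of the <user> placeholders:
<source lang=bash>
# Pull every Proxy value out of the fragment (hypothetical values):
grep -o 'Proxy "[^"]*"' <<'EOF'
Proxy "http://user:secret@proxy.example:3128";
EOF
</source>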
bb5797e048d50625eacc04f53242aa6a2b69c7a8
Category:Ubuntu
14
121
333
2014-02-04T10:03:21Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Cerithium caeruleum
0
122
337
2014-02-04T10:30:52Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = | WissName = Cerithium coeruleum | Autor = | Untergattung = | Gat…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName =
| WissName = Cerithium coeruleum
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C
| Winterruhe =
}}
a60cd80dcb363b01b94fed2c81857aa20457c4ef
338
337
2014-02-04T10:32:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Nadelschnecke
| WissName = Cerithium coeruleum
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Kenia, Rotes Meer
| Habitat =
| Nahrung = Algen, Cyanobakterien, Detritus
| Luftfeuchtigkeit =
| Temperatur = 24°C - 29°C
| Winterruhe =
}}
d349cce0cca3a82505b6a99febaef7076945a171
342
338
2014-02-04T10:43:04Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Nadelschnecke
| WissName = Cerithium coeruleum
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
464f86ae873aa181528a8b8e10faa88c82c1ca5a
Trochus sp.
0
123
340
2014-02-04T10:37:41Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Kegelschnecke | WissName = Trochus sp. | Autor = | Untergattung = …“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Kegelschnecke
| WissName = Trochus sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
6c40b8f51816216a18316dde1c3a4b36576dd81b
Cypraea annulus
0
124
341
2014-02-04T10:40:07Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Kaurischnecke | WissName = Cypraea annulata | Autor = GRAY, 1825 | Unterg…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Kaurischnecke
| WissName = Cypraea annulata
| Autor = GRAY, 1825
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
f06036003dee22b363c66e053bd1bcdfc76e3838
Calcinus laevimanus
0
125
343
2014-02-04T10:43:13Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Großscheren-Einsiedlerkrebs | WissName = Calcinus laevimanus | Autor = …“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Großscheren-Einsiedlerkrebs
| WissName = Calcinus laevimanus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
99aef7c883b724a205abf9444aa9e1d8ca1e9b4d
344
343
2014-02-04T10:43:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Großscheren-Einsiedlerkrebs
| WissName = Calcinus laevimanus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Algen, Artemia, Banane, Flockenfutter, Frostfutter, Nori-Algen, Salat
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
32ffe6b129e559d508d8fff81166f2b00aecc54d
352
344
2014-02-04T12:27:12Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Großscheren-Einsiedlerkrebs
| WissName = Calcinus laevimanus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Algen, Artemia, Banane, Flockenfutter, Frostfutter, Nori-Algen, Salat
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
<gallery mode="packed-hover">
Image:Calcinus_laevimanus.png|Neu
</gallery>
b9b9d828e3040d762ea1c41f342caeb6e732a6e6
354
352
2014-02-04T12:32:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Calcinus_laevimanus.png
| Bildbeschreibung = Calcinus laevimanus auf Nahrungssuche
| DeName = Großscheren-Einsiedlerkrebs
| WissName = Calcinus laevimanus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Algen, Artemia, Banane, Flockenfutter, Frostfutter, Nori-Algen, Salat
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
<gallery mode="packed-hover">
Image:Calcinus_laevimanus.png|Auf Nahrungssuche
</gallery>
63baf05c00e99deb082db044a2351ec2f1963678
Capnella imbricata
0
126
345
2014-02-04T11:04:35Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Bäumchenweichkoralle | WissName = Capnella imbricata | Autor = | Unterg…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Bäumchenweichkoralle
| WissName = Capnella imbricata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
4cdee96d2038b3ade004c591d784ac2d7cc03796
Caulastrea sp.
0
127
346
2014-02-04T11:09:48Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Trompetenkoralle | WissName = Caulastrea sp. | Autor = | Untergattung …“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Trompetenkoralle
| WissName = Caulastrea sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
5584bf5dd24f98bbbfb219fb8d3f5e702b6edf60
350
346
2014-02-04T12:13:43Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Trompetenkoralle
| WissName = Caulastrea sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
<gallery mode="packed-hover">
Image:Caulastrea_sp.JPG|Neu
</gallery>
f3f33950c13e0bae9e41fd9911de01c0cdd950ef
353
350
2014-02-04T12:30:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Caulastrea_sp.JPG
| Bildbeschreibung = Junge Caulastrea-Kolonie
| DeName = Trompetenkoralle
| WissName = Caulastrea sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
<gallery mode="packed-hover">
Image:Caulastrea_sp.JPG|Junge Caulastrea-Kolonie
</gallery>
efc049cd025d8b776cad9c122274d042835e21af
Leptoseris sp.
0
128
347
2014-02-04T11:14:12Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Großpolypige Steinkoralle | WissName = Leptoseris sp. | Autor = | Unter…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Großpolypige Steinkoralle
| WissName = Leptoseris sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
7a70a4bda876700023a56022a68e16760f65a416
Sarcophyton crassicaule
0
129
348
2014-02-04T11:19:06Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Hufeisen-Lederkoralle | WissName = Sarcophyton crassicaule | Autor = | U…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Hufeisen-Lederkoralle
| WissName = Sarcophyton crassicaule
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 23°C - 29°C
}}
041f4f5461dd504f465924056eaa641c04e3ddac
File:Caulastrea sp.JPG
6
130
349
2014-02-04T12:12:20Z
Lollypop
2
Caulastrea sp.
wikitext
text/x-wiki
Caulastrea sp.
4218fa25c69e278282d5053c717b1273f9bbbb31
File:Calcinus laevimanus.png
6
131
351
2014-02-04T12:26:13Z
Lollypop
2
Calcinus laevimanus
wikitext
text/x-wiki
Calcinus laevimanus
910ed14c9bbd055c64903d7b4bef42edb6ec4db9
Mycedium elephantotus
0
132
356
2014-02-04T12:48:54Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | Bild = Mycedium_elephantopus.png | Bildbeschreibung = Mycedium elephantopus | DeName = Gr…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Mycedium_elephantopus.png
| Bildbeschreibung = Mycedium elephantopus
| DeName = Großpolypige Steinkoralle
| WissName = Mycedium elephantopus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
<gallery mode="packed-hover">
Image:Mycedium_elephantopus.png|Kleine Kolonie
</gallery>
d33465985cd6e8aaefcf8491fda622a5d1c77b27
358
356
2014-02-04T12:50:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Mycedium_elephantopus.png
| Bildbeschreibung = Mycedium elephantopus
| DeName = Großpolypige Steinkoralle
| WissName = Mycedium elephantopus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien, Great Barrier Riff, Indonesien, Japan, Papua-Neuguinea
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 27°C
}}
<gallery mode="packed-hover">
Image:Mycedium_elephantopus.png|Kleine Kolonie
</gallery>
2b3f5c0c0c989150e869029bbe5494e1207b194c
361
358
2014-02-25T11:12:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Mycedium_elephantopus.png
| Bildbeschreibung = Mycedium elephantopus
| DeName = Großpolypige Steinkoralle
| WissName = Mycedium elephantopus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien, Great Barrier Riff, Indonesien, Japan, Papua-Neuguinea
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 27°C
}}
<gallery mode="packed-hover">
Image:Mycedium_elephantopus.png|Kleine Kolonie
</gallery>
22acd2b3427015f47cfcfb17e1c7393f3cd9b328
368
361
2014-03-03T16:50:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Mycedium_elephantopus.png
| Bildbeschreibung = Mycedium elephantotus
| DeName = Großpolypige Steinkoralle
| WissName = Mycedium elephantotus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien, Great Barrier Riff, Indonesien, Japan, Papua-Neuguinea
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 27°C
}}
<gallery mode="packed-hover">
Image:Mycedium_elephantopus.png|Kleine Kolonie
</gallery>
77b7f456352ec2bf49ff5ab5810fe21d69a15284
369
368
2014-03-03T16:51:56Z
Lollypop
2
hat „[[Mycedium elephantopus]]“ nach „[[Mycedium elephantotus]]“ verschoben: wrong name
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Mycedium_elephantopus.png
| Bildbeschreibung = Mycedium elephantotus
| DeName = Großpolypige Steinkoralle
| WissName = Mycedium elephantotus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien, Great Barrier Riff, Indonesien, Japan, Papua-Neuguinea
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 27°C
}}
<gallery mode="packed-hover">
Image:Mycedium_elephantopus.png|Kleine Kolonie
</gallery>
77b7f456352ec2bf49ff5ab5810fe21d69a15284
File:Mycedium elephantopus.png
6
133
357
2014-02-04T12:49:23Z
Lollypop
2
Mycedium elephantopus
wikitext
text/x-wiki
Mycedium elephantopus
b08331734ef14f653ebb77f90349df1ba7097349
VMWare Linux parameter
0
108
359
336
2014-02-06T15:47:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]][[Kategorie:Ubuntu]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Prebuilt packages from VMWare==
<source lang=bash>
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/vmware-repository.list
gpg --search C0B5E0AB66FD4949 # import the key (choose 1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</source>
Then do:
<source lang=bash>
gpg --search C0B5E0AB66FD4949 # hinzufügen (1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add --
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module-assistant:
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
5e4bd9b2b934c2c04649ff5febd842873bf050f7
360
359
2014-02-06T15:53:24Z
Lollypop
2
/* Prebuild packages from VMWare */
wikitext
text/x-wiki
[[Kategorie:VMWare]][[Kategorie:Ubuntu]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
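After the file is loaded with <code>sysctl -p</code> (as root), the active value can be read back; a quick check, assuming a Linux guest where /proc is mounted:
<source lang=bash>
# Settings are loaded with: sysctl -p /etc/sysctl.conf (as root).
# The currently active swappiness is also exposed under /proc:
cat /proc/sys/vm/swappiness
</source>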
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
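Kernel header postinst hooks are invoked with the version of the kernel being installed as their first argument. A minimal stand-in sketch; the <code>echo</code> replaces the vmware-config-tools.pl call purely for illustration:
<source lang=bash>
#!/bin/bash
# Stand-in for /etc/kernel/header_postinst.d/vmware: the hook
# receives the kernel version being installed as $1.
hook() {
    inst_kern=$1
    echo "would build modules for ${inst_kern}"
}
hook 3.2.0-52-generic
</source>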
==Prebuild packages from VMWare==
<source lang=bash>
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/vmware-repository
apt-key adv --keyserver subkeys.pgp.net --recv-keys C0B5E0AB66FD4949
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
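The <code>$(lsb_release -cs)</code> substitution inserts the release codename; a sketch with a hard-coded stand-in shows the line it generates on a 12.04 guest:
<source lang=bash>
codename=precise   # stand-in for $(lsb_release -cs)
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu ${codename} main"
</source>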
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</source>
Then do:
<source lang=bash>
gpg --search C0B5E0AB66FD4949 # import the key (choose 1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
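The brace lists above are ordinary bash brace expansion; <code>echo</code> previews the package names aptitude actually receives (three modules shown for brevity):
<source lang=bash>
# Brace expansion generates one package name per kernel module:
echo vmware-tools-{vmci,vmxnet,vsock}-modules-source
</source>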
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module-assistant:
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
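The config file is plain <code>key = "value"</code> lines, so the value can be read back with sed for a quick sanity check. A sketch; a temporary file stands in for /etc/vmware-tools/config:
<source lang=bash>
cfg=$(mktemp)
printf 'libdir = "/usr/lib/vmware-tools"\n' > "$cfg"
# Extract the quoted value of the libdir key:
sed -n 's/^libdir *= *"\(.*\)"/\1/p' "$cfg"
rm -f "$cfg"
</source>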
faeffbcc8b7d9d1f83665609e8ec718bb678f483
Parazoanthus gracilis
0
134
362
2014-02-25T11:16:47Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MeerwasserAquarium]] {{Systematik | DeName = Gelbe Krustenanemone | WissName = Parazoanthus gracilis | Autor = | Unte…“
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Gelbe Krustenanemone
| WissName = Parazoanthus gracilis
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 20°C - 26°C
}}
13b9be320bcbff737db9f3e74267e72c6f9c395d
File:Calcinus laevimanus neues Haus.png
6
135
363
2014-02-25T13:52:03Z
Lollypop
2
Calcinus laevimanus in neuem Haus
wikitext
text/x-wiki
Calcinus laevimanus in neuem Haus
139fb937c67f2714ff061040076b3f93f370cd83
Calcinus laevimanus
0
125
364
354
2014-02-25T13:53:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| Bild = Calcinus_laevimanus.png
| Bildbeschreibung = Calcinus laevimanus auf Nahrungssuche
| DeName = Großscheren-Einsiedlerkrebs
| WissName = Calcinus laevimanus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Algen, Artemia, Flockenfutter, Frostfutter, Nori-Algen, Salat
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
<gallery mode="packed-hover">
Image:Calcinus_laevimanus.png|Auf Nahrungssuche
Image:Calcinus_laevimanus_neues_Haus.png|Nach Umzug in neues Schneckenhaus
</gallery>
1138457223ba2fd9dc3099ff10938448ca0d8181
Cypraea annulus
0
124
365
341
2014-03-03T16:48:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Kaurischnecke
| WissName = Cypraea annulus
| Autor = GRAY, 1825
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
bc65e78adc714769c6805e8f86abf0564247f2cf
366
365
2014-03-03T16:48:40Z
Lollypop
2
hat „[[Cypraea annulata]]“ nach „[[Cypraea annulus]]“ verschoben: wrong name
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Kaurischnecke
| WissName = Cypraea annulus
| Autor = GRAY, 1825
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
bc65e78adc714769c6805e8f86abf0564247f2cf
Cypraea annulata
0
136
367
2014-03-03T16:48:40Z
Lollypop
2
hat „[[Cypraea annulata]]“ nach „[[Cypraea annulus]]“ verschoben: wrong name
wikitext
text/x-wiki
#WEITERLEITUNG [[Cypraea annulus]]
f071c1cb5bce85c08fc1a10da89b42d7d635314e
Mycedium elephantopus
0
137
370
2014-03-03T16:51:56Z
Lollypop
2
hat „[[Mycedium elephantopus]]“ nach „[[Mycedium elephantotus]]“ verschoben: wrong name
wikitext
text/x-wiki
#WEITERLEITUNG [[Mycedium elephantotus]]
2d0f77e1d7c2333a9b922e526f1153cce7b33224
Cerithium caeruleum
0
122
371
342
2014-03-03T18:21:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Nadelschnecke
| WissName = Cerithium caeruleum
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
e8a08d7ea6eba4fc6105adad955422a7a25d8edb
372
371
2014-03-03T18:21:57Z
Lollypop
2
moved "[[Cerithium coeruleum]]" to "[[Cerithium caeruleum]]"
wikitext
text/x-wiki
[[Kategorie:MeerwasserAquarium]]
{{Systematik
| DeName = Nadelschnecke
| WissName = Cerithium caeruleum
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
e8a08d7ea6eba4fc6105adad955422a7a25d8edb
Cerithium coeruleum
0
138
373
2014-03-03T18:21:57Z
Lollypop
2
moved "[[Cerithium coeruleum]]" to "[[Cerithium caeruleum]]"
wikitext
text/x-wiki
#WEITERLEITUNG [[Cerithium caeruleum]]
ceb2add0b1b359f142733161834dbfe3bc306c50
Fibrechannel Analyse
0
139
374
2014-04-01T13:48:14Z
Lollypop
2
Created page with "=Fibre Channel Analysis on Solaris= =Commands: Solaris= ==luxadm== ===luxadm -e port=== ===luxadm -e dump_map <HW_path>=== ===luxadm probe=== ===luxadm displa…"
wikitext
text/x-wiki
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
===luxadm -e dump_map <HW_path>===
===luxadm probe===
===luxadm display <Diskpath|WWN>===
==fcinfo==
===fcinfo hba-port===
==mpathadm==
===mpathadm list lu===
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
===lsscs list array <array_name>===
===lsscs list -a <array_name> fcport===
=Commands: Brocade=
6285ff7d57c450cba313501df1e8b401153b577b
376
374
2014-04-01T13:51:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
===luxadm -e dump_map <HW_path>===
===luxadm probe===
===luxadm display <Diskpath|WWN>===
==fcinfo==
===fcinfo hba-port===
==mpathadm==
===mpathadm list lu===
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
===lsscs list array <array_name>===
===lsscs list -a <array_name> fcport===
=Commands: Brocade=
5e3106dc23c6acb16ef8d1ff2d42cb17099a00f1
377
376
2014-04-01T14:13:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl, namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here.
* Type
Disk device: a storage array
Host Bus Adapter: the FC card in the querying host
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
149c163b1c7d03d4222dfc847c2f76f941fe3a70
378
377
2014-04-01T14:24:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
[https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)]
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
00249514cc08349e087d68637923f3344493d41c
379
378
2014-04-01T14:26:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
See also [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required)
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
f77e25011bcff9d5df0ddfbca544aa52e52ceb4f
380
379
2014-04-01T14:31:43Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
06e337e2c9574700b538a7907b01e20e344e0755
381
380
2014-04-01T14:40:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
95bd6c69f86416df989470d8cf41b9ccfa0e4ae0
382
381
2014-04-01T14:45:16Z
Lollypop
2
/* luxadm -e dump_map */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
Hence the four paths later (two per host port) in mpathadm list lu.
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
1f18c7f1d88c68e9bfad241e2d439775f5e665de
383
382
2014-04-01T14:47:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
<source lang=bash>
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
294f65ef1411e6b5f3b80bac0a62fe8acaf054a5
384
383
2014-04-01T14:49:18Z
Lollypop
2
/* luxadm probe */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage arrays to the switch with ID 1 and have a link to a switch with ID 3, to which two further storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths over our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: a storage array
Host Bus Adapter: the FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
# luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
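The path count implied by such a listing can be cross-checked with a small helper. This is only a sketch: `count_paths` is a hypothetical function, and the file it reads is assumed to hold saved `luxadm probe` output in the "Node WWN:&lt;wwn&gt; Device Type:..." format shown above.

```shell
# Sketch: count entries per storage node in saved `luxadm probe` output.
# Argument: a file containing the probe listing (an assumption, not a
# luxadm feature). Lines without "Node WWN:" are ignored.
count_paths() {
  awk -F'Node WWN:' '/Node WWN:/ { split($2, f, " "); n[f[1]]++ }
                     END { for (w in n) print w, n[w] }' "$1"
}
```

With two connected host ports and two Port WWNs per array, each node would be expected to show up four times, matching the path count seen later in mpathadm.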
===luxadm display <Diskpath|WWN>===
<source lang=bash>
</source>
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Kommandos : Common Array Manager=
==lsscs==
Ist unter Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
02955275c5dd3af9420ee4f1f36e567bc2a47f3e
385
384
2014-04-01T15:15:24Z
Lollypop
2
/* luxadm display */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Lists the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely the ones with ID 1 and ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we hang, together with 2 storage arrays, off the switch with ID 1, which in turn has a link to the switch with ID 3, to which 2 more storage arrays are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage array we see 2 Port WWNs here, i.e. 2 paths through this single host port.
Hence the 4 paths later on (2 per host port) under [[#mpathadm list lu]].
* Type
Disk device: storage array
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough pointer to the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the array should be OK ;-)
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
bbc4348cc2d76ea10fac5512af0904d1647977a2
386
385
2014-04-01T15:31:51Z
Lollypop
2
/* luxadm display */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely the ones with ID 1 and ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we hang, together with 2 storage arrays, off the switch with ID 1, which in turn has a link to the switch with ID 3, to which 2 more storage arrays are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage array we see 2 Port WWNs here, i.e. 2 paths through this single host port.
Hence the 4 paths later on (2 per host port) under [[#mpathadm list lu]].
* Type
Disk device: storage array
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough pointer to the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device a block then follows, consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should use as the primary ones to access the LUN.
==fcinfo==
===fcinfo hba-port===
<source lang=bash>
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
640491ceafb9fc0be11205f47598d3196bb57b71
387
386
2014-04-01T15:34:44Z
Lollypop
2
/* fcinfo hba-port */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely the ones with ID 1 and ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we hang, together with 2 storage arrays, off the switch with ID 1, which in turn has a link to the switch with ID 3, to which 2 more storage arrays are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage array we see 2 Port WWNs here, i.e. 2 paths through this single host port.
Hence the 4 paths later on (2 per host port) under [[#mpathadm list lu]].
* Type
Disk device: storage array
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough pointer to the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device a block then follows, consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should use as the primary ones to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, ...:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
2a6611a0423797842f9bbb3d2cf31409aece4bf5
388
387
2014-04-01T15:38:58Z
Lollypop
2
/* fcinfo */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely the ones with ID 1 and ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we hang, together with 2 storage arrays, off the switch with ID 1, which in turn has a link to the switch with ID 3, to which 2 more storage arrays are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage array we see 2 Port WWNs here, i.e. 2 paths through this single host port.
Hence the 4 paths later on (2 per host port) under [[#mpathadm list lu]].
* Type
Disk device: storage array
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough pointer to the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device a block then follows, consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should use as the primary ones to access the LUN.
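The /dev/cfg symlink lookup shown above can be scripted to list all FC controllers at once. A minimal sketch: `map_controllers` is a hypothetical helper, and the demo feeds it a fake directory built from the c6 symlink shown above rather than a real /dev/cfg.

```shell
# Resolve cN controller links to their FC (fp) hardware paths.
map_controllers() {
  for c in "$1"/c*; do
    [ -h "$c" ] || continue
    target=$(readlink "$c")
    case $target in
      *fp@*) printf '%s -> %s\n' "${c##*/}" "${target#../../devices}" ;;
    esac
  done
}
# Demo against a fake /dev/cfg built from the c6 symlink shown above:
d=$(mktemp -d)
ln -s '../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc' "$d/c6"
map_controllers "$d"   # -> c6 -> /pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
```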
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, ...:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
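To pull just the online ports out of captured fcinfo output, a short awk filter is enough. A sketch over a trimmed sample of the listing above; the `online_wwns` helper name is made up for illustration.

```shell
# Print the port WWN of every HBA port whose State is "online".
online_wwns() {
  awk '/^HBA Port WWN:/ {wwn=$4} /State: online/ {print wwn}' "$@"
}
# Demo on a trimmed sample of the fcinfo hba-port output above:
online_wwns <<'EOF'
HBA Port WWN: 2100001b328a417f
State: online
HBA Port WWN: 2101001b32aa417f
State: offline
HBA Port WWN: 2100001b32904445
State: online
EOF
```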
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
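The per-port error counters above can be condensed into one line per remote port, which makes a flaky link (such as the one with 261 loss-of-sync events) easy to spot across runs. A sketch over a trimmed sample of the listing above; the `link_failures` helper name is made up for illustration.

```shell
# One summary line per remote port: its WWN and the link failure count.
link_failures() {
  awk '/^Remote Port WWN:/ {p=$4}
       /Link Failure Count:/ {print p " link-failures=" $4}' "$@"
}
# Demo on a trimmed sample of the linkstat output above:
link_failures <<'EOF'
Remote Port WWN: 201600a0b86e103c
Link Failure Count: 3
Remote Port WWN: 201700a0b86e103c
Link Failure Count: 4
EOF
```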
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
692d5f8390fb8f6b2cbc5c67263994a325910e70
389
388
2014-04-05T10:04:56Z
Lollypop
2
/* luxadm -e port */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
Es sind also offensichtlich 2 Switches in der Fabric an Port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
und zwar mit der ID 1 und mit der ID 3.
Switch ID 1
Port 1 und 5 : Node WWN 200400a0b85bb030
Port 2 und 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (Wir selbst)
Switch ID 3
Port 1 und 5 : Node WWN 200200a0b85aeb2d
Port 2 und 6 : Node WWN 200600a0b86e10e4
Wir hängen also mit 2 Storages auf dem Switch mit der ID 1 und haben eine Verbindung zu einem Switch mit der ID 3 an dem 2 weitere Storages hängen.
* Node WWN
Wir sehen hier 4 Disk Devices mit jeweils 2 Einträgen (Gleiche Node WWN)
* Port WWN
Dies ist die Port WWN der an den Switch angeschlossenen Geräte (unter 8 finden wir uns selbst).
Pro Storage sehen wir hier 2 Port WWNs, also 2 Pfade über unseren einen Hostport.
Daher nachher 4 Pfade (2 Pro Hostport) beim [[#mpathadm list lu]].
* Type
Disk Device: Storage
Host Bus Adapter: FC-Karte
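The Port_ID column is a 24-bit Fibre Channel address. In standard FC fabric addressing its three bytes are Domain (the switch ID), Area (the switch port) and AL_PA, which matches the <Switch_ID><Switchport><??> reading above; this interpretation is an assumption taken from the FC addressing scheme, not from luxadm documentation. A minimal decoder for the values from the dump_map output:

```python
def decode_port_id(port_id_hex):
    """Split a 24-bit FC port ID into (domain, area, alpa).

    Domain = switch ID, Area = switch port; the AL_PA byte is 0
    for fabric-attached N-ports, hence the trailing 00 above."""
    pid = int(port_id_hex, 16)
    return (pid >> 16) & 0xFF, (pid >> 8) & 0xFF, pid & 0xFF

# values from the dump_map output above
for pid in ("30200", "10100", "11400", "10800"):
    print(pid, decode_port_id(pid))  # e.g. 30200 -> (3, 2, 0)
```

Note that the area byte prints in decimal, so 11400 decodes to switch port 20 (hex 0x14).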
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough firmware estimate (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
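The per-path blocks have a fixed shape, so a captured luxadm display output can be summarized programmatically. A sketch over the four path blocks shown above (sample text copied verbatim from the output):

```python
# Path(s) section copied from the luxadm display output above.
SAMPLE = """\
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
"""

def parse_paths(text):
    """Split the Path(s) section into one dict per Controller block."""
    paths, cur = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Controller "):
            cur = {"Controller": line.split()[1]}
            paths.append(cur)
        elif cur and " " in line:
            key, _, val = line.rpartition(" ")  # last word is the value
            cur[key] = val
    return paths

paths = parse_paths(SAMPLE)
online = [p["Controller"] for p in paths if p["State"] == "ONLINE"]
print(online)  # ['/dev/cfg/c4', '/dev/cfg/c6']
```

This confirms the ALUA layout at a glance: one ONLINE primary and one STANDBY secondary path per host port.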
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, ...
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
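One remote port above stands out (Loss of Sync Count: 261 versus single digits everywhere else). A small filter that flags such outliers, assuming the fcinfo output has been captured to text; the threshold (10x the smallest counter) is an arbitrary choice for illustration:

```python
def loss_of_sync(text):
    """Extract {remote port WWN: loss-of-sync count} from fcinfo --linkstat output."""
    stats, port = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Remote Port WWN:"):
            port = line.split(":", 1)[1].strip()
        elif line.startswith("Loss of Sync Count:") and port:
            stats[port] = int(line.split(":", 1)[1])
    return stats

# abbreviated sample from the output above
SAMPLE = """\
Remote Port WWN: 201600a0b86e103c
Loss of Sync Count: 3
Remote Port WWN: 201700a0b86e103c
Loss of Sync Count: 261
Remote Port WWN: 202200a0b85aeb2d
Loss of Sync Count: 1
"""

stats = loss_of_sync(SAMPLE)
suspect = {p: n for p, n in stats.items() if n > 10 * min(stats.values())}
print(suspect)  # {'201700a0b86e103c': 261}
```

A counter like this usually points at one flaky link (cable, SFP or patch panel) rather than a fabric-wide problem.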
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==switchshow==
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
=Kommandos : Common Array Manager=
==lsscs==
Ist unter Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
==Switch-Kommandos==
===switchshow===
==Port-Kommandos==
===portstatsshow===
===portstatsclear===
==Zone-Kommandos==
===zoneshow===
===alicreate===
===alishow===
a0de27a3d67510a90572517d1e7684062679abaf
395
394
2014-04-28T07:23:43Z
Lollypop
2
/* cfgadm -c unconfigure -o unusable_SCSI_LUN */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
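As a quick cross-check, the port list can be collapsed to one line per physical card; the trailing ",1" is the second PCI function of a dual-port card. A sketch over the sample output above (the paths are just the ones listed there):

```shell
# Collapse luxadm -e port output to unique physical cards:
# strip everything from "/fp@" on, then the ",1" second-function suffix.
ports='/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED'
printf '%s\n' "$ports" | sed -e 's|/fp@.*||' -e 's|,1$||' | sort -u
```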
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 14: Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we sit, together with 2 storages, on the switch with ID 1, which has a link to a switch with ID 3 to which 2 more storages are attached.
* Node WWN
Here we see 4 disk devices, each with 2 entries (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage we see 2 port WWNs here, i.e. 2 paths over our single host port.
Hence the 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage
Host Bus Adapter: FC card
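The Port_ID decoding can be scripted. A small awk sketch, fed here with a few rows from the dump_map output above (in practice you would pipe the real `luxadm -e dump_map` output, minus the header line, into it); the trailing two digits are ignored:

```shell
# Sketch: split each Port_ID into <switch><port> (1 + 2 hex digits, as read above).
awk '{
  sw  = substr($2, 1, length($2) - 4)   # leading digit: switch (domain) ID
  prt = substr($2, length($2) - 3, 2)   # next two hex digits: switch port
  printf "switch %s port %s -> node WWN %s\n", sw, prt, $5
}' <<'EOF'
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f
EOF
```

This reproduces the per-switch/per-port grouping above directly from the table.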
===luxadm probe===
Lists all detected Fibre Channel devices.
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough firmware indication (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, there follows a block of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
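To map all controllers at once, a loop over /dev/cfg works. A sketch; the directory is a parameter here only so the loop can be tried on any tree:

```shell
#!/bin/sh
# Sketch: print "controller -> hardware path" for every cfg link.
# $1 defaults to /dev/cfg; `ls -l` output ends in "link -> target".
cfgdir=${1:-/dev/cfg}
for c in "$cfgdir"/c*; do
  ls -l "$c" | awk '{ printf "%s -> %s\n", $(NF-2), $NF }'
done
```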
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
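Spotting the bad link in that wall of counters is easier with a filter. A sketch, fed here with two of the stanzas above (pipe the real `fcinfo remote-port ... --linkstat` output in instead); the threshold of 10 is an arbitrary assumption:

```shell
# Sketch: flag remote ports whose Loss of Sync counter looks suspicious.
awk '
  /Remote Port WWN:/ { wwn = $NF }
  /Loss of Sync Count:/ && $NF + 0 > 10 {
    printf "check remote port %s: %s sync losses\n", wwn, $NF
  }' <<'EOF'
Remote Port WWN: 201600a0b86e103c
Loss of Sync Count: 3
Remote Port WWN: 201700a0b86e103c
Loss of Sync Count: 261
EOF
```

Here this singles out 201700a0b86e103c, the port with 261 sync losses in the listing above.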
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
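With many unusable LUNs this gets tedious. A dry-run sketch that derives one unconfigure command per controller::WWN from the listing (it only prints the commands; the sample rows are from the output above):

```shell
# Sketch: strip the ",<LUN>" suffix and emit each Ap_Id once.
awk '/unusable/ {
  sub(/,[0-9]+$/, "", $1)
  if (!seen[$1]++)
    print "cfgadm -c unconfigure -o unusable_SCSI_LUN " $1
}' <<'EOF'
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
EOF
```

Pipe the real `cfgadm -al -o show_FCP_dev` output in instead of the heredoc, review the printed commands, then run them.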
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this issues a forced LIP (loop initialization)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
bc57ccc138cedff84e9044df87d8c371fbe7bd49
Category:Brocade
14
140
375
2014-04-01T13:50:48Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
ZFS Recovery
0
30
396
304
2014-05-07T16:58:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<source lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</source>
In /etc/zfs:
<source lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</source>
For a zpool in a Solaris Cluster:
<source lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</source>
or
<source lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</source>
<source lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</source>
So, for example, rolling back to Tue Nov 6 07:54:09 2012 -> txg 40353853
<source lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</source>
==PANIC, NOTICE: spa_import_rootpool: error 19==
The fix is to specify the pool and the device explicitly. So if the following appears during boot:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
then a boot into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
a2658f2c5ab5fe776eafc9889be7e1c8ac8a37a2
ZFS fast scrub
0
141
397
2014-05-07T17:02:43Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:ZFS]] [[Kategorie:Solaris]] NEVER DO THIS!!! If you need a fast scrub to get to production state after an bloody hard unplanned downtime... and so …“
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get to production state after an bloody hard unplanned downtime... and so on...
I would expect you to not to do this.
But it worked for me:
<source lang=bash>
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</source>
This sets the scrub delay to zero... your system will do a lot of scrubbing and not so much other things.
Remember to set it back to the old value later (4 in this example)!
But remember I told you: NEVER DO THIS!!!
1d8fb60fcf23b2c0660a9fb89541714687df6b68
398
397
2014-05-07T17:03:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get to production state after an bloody hard unplanned downtime... and so on...
I would expect you to not to do this.
But it worked for me:
<source lang=bash>
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</source>
This sets the scrub delay to zero... your system will do a lot of scrubbing and not so much other things.
Remember to set it back to the old value later (4 in this example)!
<source lang=bash>
# echo "zfs_scrub_delay/W4" | mdb -kw
zfs_scrub_delay:0x0 = 0x4
</source>
But remember I told you: NEVER DO THIS!!!
77f4ba8f17b1562c514f5f6c26ec8217c40cdcf1
399
398
2014-05-07T17:05:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get back to a production state after a bloody hard unplanned downtime... and so on...
I would expect you not to do this.
But it worked for me:
<source lang=bash>
# echo "zfs_scrub_delay/D" | mdb -k
zfs_scrub_delay:
zfs_scrub_delay:4
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</source>
This sets the scrub delay to zero... your system will do a lot of scrubbing and not much else.
Remember to set it back to the old value later (4 in this example)!
<source lang=bash>
# echo "zfs_scrub_delay/W4" | mdb -kw
zfs_scrub_delay:0x0 = 0x4
</source>
But remember I told you: NEVER DO THIS!!!
ab61389e47a316342521dbc8f712eb2f914ff858
Category:Schaben
14
142
400
2014-05-09T20:02:55Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Tiere]]“
wikitext
text/x-wiki
[[Kategorie:Tiere]]
cdedcc18051d8835b96ae206bd357542432afa24
Gromphadorhina spec.
0
144
402
2014-05-09T20:07:46Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Gromphadorhina]]“
wikitext
text/x-wiki
[[Kategorie:Gromphadorhina]]
b1bebddae0f3fad969b205cd4643d10dbe947ae9
403
402
2014-05-09T20:30:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Gromphadorhina]]
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina spec.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
39cba6e493d7e418f42d537abe443ccc7d36ca55
404
403
2014-05-09T20:36:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Gromphadorhina]]
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
1308019f434f431eefb83fe7d06a3d5b4190ef1c
Template:Systematik
10
117
405
323
2014-05-09T20:37:03Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
</includeonly>
<noinclude>
532efa02de643c5c160dbb4a28efb7114df21af9
Gromphadorhina portentosa
0
145
406
2014-05-28T12:28:08Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Gromphadorhina]] {{Systematik | DeName = Fauchschabe | WissName = Gromphadorhina portentosa | Autor = | Untergattung …“
wikitext
text/x-wiki
[[Kategorie:Gromphadorhina]]
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
a8d1cbc0963f7df01559cecc1d09b8ace158540b
Elliptorhina javanica
0
146
407
2014-05-28T12:40:31Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Gromphadorhina]] {{Systematik | DeName = Fauchschabe | WissName = Elliptorhina javanica | Autor = | Untergattung …“
wikitext
text/x-wiki
[[Kategorie:Gromphadorhina]]
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor =
| Untergattung =
| Gattung = Elliptorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
01abfbd1715beeb195786efa471cfe70cdca42ed
408
407
2014-05-28T12:41:05Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor =
| Untergattung =
| Gattung = Elliptorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
b8a0a9dbc11e930ef875156c5d162bd262d634da
Gromphadorhina spec.
0
144
409
404
2014-05-28T12:41:54Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
78610f42bd7287ffbebcdbecdc5979a0d2c34427
Category:Elliptorhina
14
147
410
2014-05-28T12:43:30Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Schaben]]“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
b2ea0c52ba3cc8ec3158b4eaccc987028f8b404c
Template:Systematik
10
117
411
405
2014-05-28T12:45:15Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[Kategorie:{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
</includeonly>
<noinclude>
eb4e057e84b95e6a3a63f77bacf12970fea2a76d
412
411
2014-05-28T12:46:18Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
</includeonly>
<noinclude>
532efa02de643c5c160dbb4a28efb7114df21af9
Archimandrita tesselata
0
148
413
2014-05-28T12:51:17Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Pfefferschabe | WissName = Archimandrita tesselata | Autor = | Untergattung = | Gattung =…“
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
1209ed499cfcd1030cffc1be742a478cef8b6b8c
415
413
2014-05-28T12:55:09Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
cc721953c46331ad957bde5b1bd4e2e37277c5cb
Category:Archimandrita
14
149
414
2014-05-28T12:52:17Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Schaben]]“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
b2ea0c52ba3cc8ec3158b4eaccc987028f8b404c
Blaptica dubia
0
150
416
2014-05-28T12:58:09Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Argentinische Waldschabe | WissName = Blaptica dubia | Autor = | Untergattung = | Gattung …“
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| WissName = Blaptica dubia
| Autor =
| Untergattung =
| Gattung = Blaptica
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Argentinien, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
93dffe97c88d4751497cdd2884a97831e3a010f5
Category:Blaptica
14
151
417
2014-05-28T12:58:42Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: Schaben]]“
wikitext
text/x-wiki
[[Kategorie: Schaben]]
ba9795aca49cac3148fe273e4c3817ca37cc6aa9
Princisia vanwaerebeki
0
152
418
2014-05-28T13:00:17Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Fauchschabe | WissName = Princisia vanwaerebeki | Autor = | Untergattung = | Gattung = Pr…“
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor =
| Untergattung =
| Gattung = Princisia
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
df92bd8c5138a7a97e3c32b15bf20d39db5ef2e3
Category:Princisia
14
153
419
2014-05-28T13:08:09Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Schaben]]“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
b2ea0c52ba3cc8ec3158b4eaccc987028f8b404c
Gromphadorhina portentosa
0
145
420
406
2014-05-28T13:13:01Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
9f05b434ee75afb1aba3b1e3e065f4c5109553ad
Archispirostreptus gigas
0
13
421
26
2014-05-28T13:14:20Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Riesentausendfüsser
| WissName = Archispirostreptus gigas
| Autor =
| Untergattung =
| Gattung = Archispirostreptus
| Familie =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
c8deec132789ddf05c57541e8cf5427590fe3ef8
424
421
2014-05-28T13:23:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Riesentausendfüsser
| WissName = Archispirostreptus gigas
| Autor =
| Untergattung =
| Gattung = Archispirostreptus
| Familie =
| Unterfamilie =
| Art =
| Verbreitung = Somalia, Kenia, Tansania, Mosambik
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
65d1352a8a2ef18aeda429ca12b5bf53eb0e3391
425
424
2014-05-28T13:24:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Riesentausendfüsser
| WissName = Archispirostreptus gigas
| Autor =
| Untergattung =
| Gattung = Archispirostreptus
| Familie = Spirostreptidae
| Unterfamilie =
| Art =
| Verbreitung = Somalia, Kenia, Tansania, Mosambik
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
67c2db26ecc2273797366b63a0eeaa6a7de45cd9
426
425
2014-05-28T13:24:39Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Riesentausendfüsser
| WissName = Archispirostreptus gigas
| Autor = Peters, 1855
| Untergattung =
| Gattung = Archispirostreptus
| Familie = Spirostreptidae
| Unterfamilie =
| Art =
| Verbreitung = Somalia, Kenia, Tansania, Mosambik
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
d88c65757f0ef73021f5663968b79b0381c13ee8
Orthoporus ornatus
0
154
422
2014-05-28T13:14:46Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Wüstentausendfüsser | WissName = Orthoporus ornatus | Autor = | Untergattung = | Gattung …“
wikitext
text/x-wiki
{{Systematik
| DeName = Wüstentausendfüsser
| WissName = Orthoporus ornatus
| Autor =
| Untergattung =
| Gattung = Orthoporus
| Familie =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
59f2377630c3ce22e574e41604cd21b97c60f79b
430
422
2014-05-28T13:33:03Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Wüstentausendfüsser
| WissName = Orthoporus ornatus
| Autor =
| Untergattung =
| Gattung = Orthoporus
| Familie = Spirostreptidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
5cea8f00fc48b655b2d490e717f153e90feb0ae4
431
430
2014-05-28T13:35:04Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Wüstentausendfüsser
| WissName = Orthoporus ornatus
| Autor = Girard, 1853
| Untergattung =
| Gattung = Orthoporus
| Familie = Spirostreptidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
9b9a09bce33219e096fb2da66586cec87f543167
Category:Orthoporus
14
155
423
2014-05-28T13:20:08Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Tausendfuesser]]“
wikitext
text/x-wiki
[[Kategorie:Tausendfuesser]]
0621b82421e1a1f52ea373a7753ab04e4564b24b
Telodeinopus aoutii
0
156
427
2014-05-28T13:27:53Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Telodeinopus aoutii | Autor = Demange, 1971 | Untergattung = | Gattung = Tel…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Telodeinopus aoutii
| Autor = Demange, 1971
| Untergattung =
| Gattung = Telodeinopus
| Familie =
| Unterfamilie =
| Art =
| Verbreitung = Togo, Ghana
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
9719faa6dd7b410f090109a88c7a2c97b7a5d858
428
427
2014-05-28T13:29:36Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Telodeinopus aoutii
| Autor = Demange, 1971
| Untergattung =
| Gattung = Telodeinopus
| Familie = Spirostreptida
| Unterfamilie =
| Art =
| Verbreitung = Togo, Ghana
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
a236278701624f243e45da17a1cbb6ae350d5d84
Category:Telodeinopus
14
157
429
2014-05-28T13:30:15Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Tausendfuesser]]“
wikitext
text/x-wiki
[[Kategorie:Tausendfuesser]]
0621b82421e1a1f52ea373a7753ab04e4564b24b
Fibrechannel Analyse
0
139
432
395
2014-06-23T12:51:40Z
Lollypop
2
/* switchshow */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibrechannel Analyse unter Solaris=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known at a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are apparently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we sit with 2 storage systems on the switch with ID 1 and have a link to a switch with ID 3, to which 2 further storage systems are attached.
* Node WWN
We see 4 disk devices with 2 entries each (same Node WWN).
* Port WWN
The Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 Port WWNs here, i.e. 2 paths over this single host port.
Hence 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough idea of the firmware level (firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device a block follows, consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller maps to an FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this triggers a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Kommandos : Common Array Manager=
==lsscs==
Located under Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
==Switch-Kommandos==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What do we see?
1. There is a dual ISL (Inter-Switch Link) to another switch (san-sw_21)
2. 16 ports are licensed; ports 16-23 have no POD license (No_Module, Disabled)
==Port-Kommandos==
===portstatsshow===
===portstatsclear===
==Zone-Kommandos==
===zoneshow===
===alicreate===
===alishow===
f89f905eab5feae0cfc718dca8d14cf6712484c8
433
432
2014-06-23T13:00:10Z
Lollypop
2
/* switchshow */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibrechannel Analyse unter Solaris=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known at a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are apparently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we sit with 2 storage systems on the switch with ID 1 and have a link to a switch with ID 3, to which 2 further storage systems are attached.
* Node WWN
We see 4 disk devices with 2 entries each (same Node WWN).
* Port WWN
The Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 Port WWNs here, i.e. 2 paths over this single host port.
Hence 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough idea of the firmware level (firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device a block follows, consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller maps to an FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
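The per-path State entries above can be tallied mechanically, e.g. to spot OFFLINE paths at a glance. A minimal POSIX-shell sketch (the helper name summarize_states is invented here, not part of luxadm):

<source lang=bash>
# Count the State column of `luxadm display` output per state.
summarize_states() {
  awk '/^[[:space:]]*State/ { count[$2]++ }
       END { for (s in count) printf "%s=%d\n", s, count[s] }' | sort
}

# Usage (path elided as above):
# luxadm display /dev/rdsk/c8t...d0s2 | summarize_states
</source>

For the listing above this would report ONLINE=2 and STANDBY=2, i.e. all four paths healthy.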
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
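With four HBA ports listed, it is handy to filter for the ones that are actually online. A small sketch (the function online_hba_ports is made up here) that pairs each "HBA Port WWN" with the "State" line that follows it:

<source lang=bash>
# Print only the WWNs of HBA ports whose State is "online".
online_hba_ports() {
  awk '/HBA Port WWN:/          { wwn = $NF }
       /^[[:space:]]*State:/    { if ($2 == "online") print wwn }'
}

# Usage: fcinfo hba-port | online_hba_ports
</source>

On the host above this yields 2100001b328a417f and 2100001b32904445, the two connected ports.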
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
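One value stands out in the listing above: Loss of Sync Count 261 on 201700a0b86e103c, which hints at a flaky link. A sketch (helper name and the threshold of 100 are arbitrary choices here) that flags such ports automatically:

<source lang=bash>
# Flag remote ports whose "Loss of Sync Count" exceeds a threshold
# (default 100); reads `fcinfo remote-port --linkstat` output on stdin.
flag_sync_loss() {
  awk -v max="${1:-100}" '
    /Remote Port WWN:/    { wwn = $NF }
    /Loss of Sync Count:/ { if ($NF + 0 > max) print wwn, $NF }'
}

# Usage: fcinfo remote-port --port 2100001b32904445 --linkstat | flag_sync_loss
</source>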
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
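Instead of typing the unconfigure once per device, the unusable Ap_Ids from the listing above can be collected first. A sketch (the helper unusable_apids is invented; it strips the ,LUN suffix so each controller::WWN appears once):

<source lang=bash>
# Extract unique controller::WWN Ap_Ids of all unusable FCP devices.
unusable_apids() {
  awk '$NF == "unusable" { sub(/,[0-9]+$/, "", $1); print $1 }' | sort -u
}

# cfgadm -al -o show_FCP_dev | unusable_apids | while read apid; do
#   cfgadm -c unconfigure -o unusable_SCSI_LUN "$apid"
# done
</source>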
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this triggers a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Kommandos : Common Array Manager=
==lsscs==
Located under Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
==Switch-Kommandos==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# Zoning is active on the switch (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a dual ISL (Inter-Switch Link) via E-Ports to another switch (san-sw_21)
# 7 ports are fitted with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP either (No_Module)
# 9 ports are in use
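The port counts above can be cross-checked by tallying the State column of the switchshow table. A sketch (the helper port_state_summary is made up here; it keys on rows whose first field is a numeric port index):

<source lang=bash>
# Tally switchshow port rows by their State column.
port_state_summary() {
  awk '$1 ~ /^[0-9]+$/ && NF >= 6 { count[$6]++ }
       END { for (s in count) printf "%s=%d\n", s, count[s] }' | sort
}
</source>

For the output above this should report No_Light=7, No_Module=8 and Online=9.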
==Port-Kommandos==
===portstatsshow===
===portstatsclear===
==Zone-Kommandos==
===zoneshow===
===alicreate===
===alishow===
da8e3ba45b970038c0724ef971709ced29cda7f0
434
433
2014-06-23T13:03:34Z
Lollypop
2
/* Switch-Kommandos */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibrechannel Analyse unter Solaris=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known at a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are apparently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we sit with 2 storage systems on the switch with ID 1 and have a link to a switch with ID 3, to which 2 further storage systems are attached.
* Node WWN
We see 4 disk devices with 2 entries each (same Node WWN).
* Port WWN
The Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 Port WWNs here, i.e. 2 paths over this single host port.
Hence 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm probe===
Auflistung aller erkannten Fibrechanneldevices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Hersteller
* Product ID: STK6580_6780
Also ein StorageTek 6580/6780
* Revision: 0784
Grobe Firmwarepeilung (Firmware Version: 07.84.47.10)
Siehe hier [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Praktisch, wenn man mit NetApps arbeitet, um die LUNs zuzuordnen.
* Unformatted capacity: 204800.000 MBytes
Immer gut zu wissen
* Write Cache: Enabled
Die Batterie im Storage sollte also OK sein ;-)
* Path(s):
Rawdevicepath
Hardwaredevicepath
Jetzt folgen immer pro Pfad zu diesem Device ein Block aus
Controller (siehe unten)
Device Address <Port WWN vom Device>,<LUN ID>
Class <primary|secondary> (siehe unten)
State <Online|Standby|Oflline>
Zuweisung Controller zum FC-Port über:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Man sieht den Hardwarepfad von [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) teilt das Device dem Host mit, über welche Pfade der Host primär auf die LUN zugreifen soll.
==fcinfo==
===fcinfo hba-port===
Gibt ein paar Infos über Hersteller, Modell, Firmware, Port und Node WWN, Current Speed, ... aus
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful! Does a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Kommandos : Common Array Manager=
==lsscs==
Ist unter Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
==Switch-Kommandos==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
Was sagt uns das?
# Dies ist der "Principal" (alle andere sind "Subordinate") der Fabric "Fabric1" (switchRole:, zoning:)
# Der Switch ist gezoned (zoning:)
# SwitchID ist "fffc01"
# Es ist ein 24-Port Switch
# Es gibt einen doppelten ISL (InterSwitchLink) zu einem anderen Switch E-Port (san-sw_21)
# 6 Ports sind mit SFPs bestückt, aber nicht belegt (0,4,11-15)
# 8 Ports haben keine Lizenz und auch kein SFP (No_Module)
# 9 Ports sind belegt
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port-Kommandos==
===portstatsshow===
===portstatsclear===
==Zone-Kommandos==
===zoneshow===
===alicreate===
===alishow===
efb82914e60a16ee5727d77b1634a35587b85e0e
455
434
2014-09-18T19:54:12Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><AL_PA>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 0x14 (= 20; the port field is hexadecimal): Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we sit on the switch with ID 1 together with two storage arrays, and have a link to a switch with ID 3 with two more storage arrays attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN)
* Port WWN
The Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage array we see two Port WWNs here, i.e. two paths through this one host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage array
Host Bus Adapter: FC card
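The Port_ID is the 24-bit FC address printed in hex: one byte each for the switch domain ID, the port (area), and the AL_PA. A small bash helper (hypothetical, not part of luxadm) to decode it:
<source lang=bash>
# Decode a Port_ID from "luxadm -e dump_map" into its three bytes:
# domain (= switch ID), area (= switch port) and AL_PA.
# Hypothetical helper, not part of luxadm.
decode_port_id() {
    id=$(printf '%06x' "0x$1")           # zero-pad to 6 hex digits
    d=$(printf '%s' "$id" | cut -c1-2)   # domain byte
    a=$(printf '%s' "$id" | cut -c3-4)   # area byte
    p=$(printf '%s' "$id" | cut -c5-6)   # AL_PA byte
    printf 'domain(switch)=%d area(port)=%d alpa=%d\n' "0x$d" "0x$a" "0x$p"
}
decode_port_id 30200   # -> domain(switch)=3 area(port)=2 alpa=0
decode_port_id 11400   # -> domain(switch)=1 area(port)=20 alpa=0
</source>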
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough firmware indication (full firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when you also work with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After that comes one block per path to this device, consisting of
Controller (see below)
Device Address <device Port WWN>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Map the controller to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access), the device tells the host which paths it should prefer for accessing the LUN.
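The single `ls -al` lookup generalises to all controllers at once; a minimal sketch, assuming an awk that accepts a multi-character field separator (nawk/gawk):
<source lang=bash>
# Map each cfgadm controller (c4, c6, ...) to its FC hardware path by
# parsing the /dev/cfg symlink targets out of "ls -l" output.
list_fc_controllers() {
    for c in /dev/cfg/c*; do
        ls -l "$c" 2>/dev/null
    done | awk -F' -> ' 'NF == 2 { n = split($1, f, "/"); print f[n], "->", $2 }'
}
list_fc_controllers
</source>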
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
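Checking the counters port by port gets tedious; the loop below sweeps every HBA port and flags only remote ports with non-zero link-error counters. A sketch built on the fcinfo output format shown above:
<source lang=bash>
# For every HBA port WWN, pull the link statistics of all remote ports
# and print only the counter lines that are greater than zero.
for wwn in $(fcinfo hba-port 2>/dev/null | awk '/HBA Port WWN:/ {print $4}'); do
    echo "=== HBA port $wwn ==="
    fcinfo remote-port --port "$wwn" --linkstat |
        awk '/Remote Port WWN:/ { rp = $4 }
             /Count:/ && $NF > 0 { print rp ": " $0 }'
done
</source>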
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
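With many stale LUNs it helps to derive the unconfigure commands from the cfgadm listing itself; a sketch that strips the `,<LUN>` suffix and deduplicates the attachment points (it only echoes the commands, remove the `echo` to run them):
<source lang=bash>
# Collect every unusable FCP attachment point, cut off the ,<LUN> part
# and print one unconfigure command per controller::WWN pair.
cfgadm -al -o show_FCP_dev 2>/dev/null |
    awk '/unusable/ { sub(/,[0-9]+$/, "", $1); print $1 }' |
    sort -u |
    while read apid; do
        echo cfgadm -c unconfigure -o unusable_SCSI_LUN "$apid"
    done
</source>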
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forced LIP (link reset)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
Located on Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the principal switch of the fabric "Fabric1" (all others are "subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# 6 ports have SFPs fitted but are unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
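The port tally above can be derived mechanically from a saved switchshow capture; a sketch (the file name switchshow.txt is only an example):
<source lang=bash>
# Count port states (Online, No_Light, No_Module, ...) from the
# numbered port lines of a switchshow capture.
[ -f switchshow.txt ] &&
    awk '$1 ~ /^[0-9]+$/ { n[$6]++ } END { for (s in n) print s, n[s] }' switchshow.txt
</source>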
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your user's ~/.ssh/authorized_keys===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
done
</source>
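To run the backup on a schedule, a crontab entry along these lines could be added on the backup host (the time and log path are only examples):
<source lang=bash>
# m h dom mon dow  command  -- nightly at 02:30 (illustrative schedule)
30 2 * * * /opt/bin/backup_brocade_config >>/var/log/brocade_backup.log 2>&1
</source>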
2afcc987f18e5338ebbdf6854ad360dabd205a6d
456
455
2014-09-18T19:55:52Z
Lollypop
2
/* Backup der Switchconfig per Script */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibrechannel Analyse unter Solaris=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Gibt die Hardwarepfade der vorhandened Fibrechannelports und deren Status aus:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 Dualport Karten:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
Aus der Seite [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support Login benötigt):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
Also stecken unsere Karten in Slot 4 und 5.
===luxadm -e dump_map <HW_path>===
Gibt die Tabelle der bekannten Geräte an einem Port aus
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Erklärung der interessanten Spalten:
* Port_ID <Switch_ID><Switchport><??>
Es sind also offensichtlich 2 Switches in der Fabric an Port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
und zwar mit der ID 1 und mit der ID 3.
Switch ID 1
Port 1 und 5 : Node WWN 200400a0b85bb030
Port 2 und 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (Wir selbst)
Switch ID 3
Port 1 und 5 : Node WWN 200200a0b85aeb2d
Port 2 und 6 : Node WWN 200600a0b86e10e4
Wir hängen also mit 2 Storages auf dem Switch mit der ID 1 und haben eine Verbindung zu einem Switch mit der ID 3 an dem 2 weitere Storages hängen.
* Node WWN
Wir sehen hier 4 Disk Devices mit jeweils 2 Einträgen (Gleiche Node WWN)
* Port WWN
Dies ist die Port WWN der an den Switch angeschlossenen Geräte (unter 8 finden wir uns selbst).
Pro Storage sehen wir hier 2 Port WWNs, also 2 Pfade über unseren einen Hostport.
Daher nachher 4 Pfade (2 Pro Hostport) beim [[#mpathadm list lu]].
* Type
Disk Device: Storage
Host Bus Adapter: FC-Karte
===luxadm probe===
Auflistung aller erkannten Fibrechanneldevices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Hersteller
* Product ID: STK6580_6780
Also ein StorageTek 6580/6780
* Revision: 0784
Grobe Firmwarepeilung (Firmware Version: 07.84.47.10)
Siehe hier [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Praktisch, wenn man mit NetApps arbeitet, um die LUNs zuzuordnen.
* Unformatted capacity: 204800.000 MBytes
Immer gut zu wissen
* Write Cache: Enabled
Die Batterie im Storage sollte also OK sein ;-)
* Path(s):
Rawdevicepath
Hardwaredevicepath
Jetzt folgen immer pro Pfad zu diesem Device ein Block aus
Controller (siehe unten)
Device Address <Port WWN vom Device>,<LUN ID>
Class <primary|secondary> (siehe unten)
State <Online|Standby|Oflline>
Zuweisung Controller zum FC-Port über:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
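The per-path blocks above lend themselves to mechanical summarizing. A minimal sketch; the sample text below is condensed from the listing above, and in a real run you would pipe `luxadm display` output in directly:

```shell
# Condensed copy of the per-path blocks from the luxadm display output above.
luxadm_sample='Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Class secondary
State STANDBY'

# Pair each "Class" line with the following "State" line and count combinations.
path_summary=$(echo "$luxadm_sample" | awk '
  $1 == "Class" { cls = $2 }
  $1 == "State" { count[cls " " $2]++ }
  END { for (k in count) print k, count[k] }' | sort)
echo "$path_summary"
```

For a healthy ALUA setup you would expect all primary paths ONLINE and all secondary paths STANDBY, as in this sample.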
==fcinfo==
===fcinfo hba-port===
Prints some information about the manufacturer, model, firmware, port and node WWN, current speed, and more.
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
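Growing "Loss of Sync" counters, like the 261 on remote port 201700a0b86e103c above, usually point at a flaky link. A small sketch that flags remote ports whose counter exceeds a threshold; the sample is condensed from the output above:

```shell
# Condensed copy of the fcinfo remote-port --linkstat output above.
linkstat_sample='Remote Port WWN: 201600a0b86e103c
Loss of Sync Count: 3
Remote Port WWN: 201700a0b86e103c
Loss of Sync Count: 261
Remote Port WWN: 202200a0b85aeb2d
Loss of Sync Count: 1'

flag_loss_of_sync() {
  # $1 = threshold; prints WWNs whose "Loss of Sync Count" exceeds it
  awk -v limit="$1" '
    /Remote Port WWN:/    { wwn = $NF }
    /Loss of Sync Count:/ { if ($NF + 0 > limit) print wwn }'
}

suspects=$(echo "$linkstat_sample" | flag_loss_of_sync 100)
echo "$suspects"
```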
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
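Combining the two previous subcommands, the unusable attachment points can be collected mechanically. Note that `cfgadm -c unconfigure -o unusable_SCSI_LUN` takes the Ap_Id without the trailing `,<LUN>` suffix, as in the example above. A sketch over a condensed sample of the `grep unusable` output:

```shell
# Condensed copy of the "cfgadm -al -o show_FCP_dev | grep unusable" output above.
cfgadm_sample='c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable'

# Strip the ",<LUN>" suffix and deduplicate; each resulting Ap_Id would then be
# passed to: cfgadm -c unconfigure -o unusable_SCSI_LUN <Ap_Id>
unusable_aps=$(echo "$cfgadm_sample" | awk '{ sub(/,[0-9]+$/, "", $1); print $1 }' | sort -u)
echo "$unusable_aps"
```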
=Commands: Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to the E-Port of another switch (san-sw_21)
# 7 ports are populated with SFPs but unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
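The port-state tally above can be derived mechanically from the switchshow port table (field 6 is the state). A sketch over a condensed sample of the table:

```shell
# Condensed copy of the switchshow port table above (Index Port Address Media Speed State Proto ...).
switchshow_sample='0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port
2 2 010200 id N8 Online FC F-Port
16 16 011000 -- N8 No_Module FC'

# Count ports per state (column 6 of the port table).
state_counts=$(echo "$switchshow_sample" | awk '{ count[$6]++ } END { for (s in count) print s, count[s] }' | sort)
echo "$state_counts"
```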
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate an SSH key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to the backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
done
</source>
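The script above only ever adds files, so the backup directory grows without bound. A possible companion sketch for pruning old backups on the backup host; `KEEP_DAYS` and the throwaway demo directory are assumptions, not part of the original setup:

```shell
# Prune switch-config backups older than KEEP_DAYS days (an assumed retention policy).
KEEP_DAYS=90
prune_backups() {
  # $1 = backup directory; deletes *_config_*.txt files older than KEEP_DAYS days
  find "$1" -name '*_config_*.txt' -type f -mtime +"${KEEP_DAYS}" -exec rm -f {} +
}

# Demonstration against a throwaway directory:
demo_dir=$(mktemp -d)
touch "${demo_dir}/172.30.40.50_config_20140101-000000.txt"                  # fresh file, kept
touch -t 201401010000 "${demo_dir}/172.30.40.51_config_20140101-000000.txt"  # old file, pruned
prune_backups "${demo_dir}"
remaining=$(ls "${demo_dir}")
echo "$remaining"
rm -rf "${demo_dir}"
```

Run it from the same cron job as the backup script, after the configupload loop.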
0af1a76fb109fcbc4ae8310cb0a6e6a6fd4cfe4c
ZFS Networker
0
158
435
2014-09-15T07:04:20Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:ZFS]] [[Kategorie:Solaris]] =Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker= First of all: 1. Install Solaris client package…“
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
First of all:
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
</pre>
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=/local/${RGname} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
<source lang=bash>
# less /nsr/res/TEST_TEAM.res
type: savepnpc;
precmd: /global/recover-rg/rman_backup/scripts/networker_scripts/TESTTEAM_networker_precmd.sh;
timeout: "12:00pm";
abort precmd with group: Yes;
</source>
6a5662d40d1261367a386c397a5524eb1f03023a
436
435
2014-09-15T07:55:48Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
First of all:
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
</pre>
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=/local/${RGname} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# cat /nsr/res/sample.res
type: savepnpc;
precmd: "/local/sample-rg/scripts/networker/networker_precmd.sh >/local/sample-rg/scripts/networker/networker_precmd.log 2>&1";
pstcmd: "/local/sample-rg/scripts/networker/networker_pstcmd.sh >/local/sample-rg/scripts/networker/networker_pstcmd.log 2>&1";
timeout: "12:00pm";
abort precmd with group: Yes;
</source>
aeaa60e625ef8e8c44da5634ab5959623445cd35
437
436
2014-09-15T08:03:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
First of all:
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
</pre>
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=/local/${RGname} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# cat /nsr/res/sample.res
type: savepnpc;
precmd: "/local/sample-rg/nsr/networker_precmd.sh >/local/sample-rg/nsr/networker_precmd.log 2>&1";
pstcmd: "/local/sample-rg/nsr/networker_pstcmd.sh >/local/sample-rg/nsr/networker_pstcmd.log 2>&1";
timeout: "12:00pm";
abort precmd with group: Yes;
</source>
Of course our sample.res is just a link:
<source lang=bash>
# ls -al /nsr/res/sample.res
lrwxrwxrwx 1 root root 40 Sep 5 2014 /nsr/res/sample.res -> /local/sample-rg/nsr/res/sample.res
</source>
ef98f4130ed14c90d0f91b78c12c41241ba4004a
438
437
2014-09-15T08:03:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
First of all:
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
</pre>
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=/local/${RGname} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# cat /nsr/res/sample.res
type: savepnpc;
precmd: "/local/sample-rg/nsr/networker_precmd.sh >/local/sample-rg/nsr/networker_precmd.log 2>&1";
pstcmd: "/local/sample-rg/nsr/networker_pstcmd.sh >/local/sample-rg/nsr/networker_pstcmd.log 2>&1";
timeout: "12:00pm";
abort precmd with group: Yes;
</source>
Of course our sample.res is just a link:
<source lang=bash>
# ls -al /nsr/res/sample.res
lrwxrwxrwx 1 root root 40 Sep 5 2014 /nsr/res/sample.res -> /local/sample-rg/nsr/res/sample.res
</source>
68f50a381900341f0c06412778c959b8aab0051a
439
438
2014-09-15T08:04:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
First of all:
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
</pre>
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=/local/${RGname} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# cat /nsr/res/sample.res
type: savepnpc;
precmd: "/local/sample-rg/nsr/networker_precmd.sh >/local/sample-rg/nsr/networker_precmd.log 2>&1";
pstcmd: "/local/sample-rg/nsr/networker_pstcmd.sh >/local/sample-rg/nsr/networker_pstcmd.log 2>&1";
timeout: "12:00pm";
abort precmd with group: Yes;
</source>
Of course our sample.res is just a link:
<source lang=bash>
# ls -al /nsr/res/sample.res
lrwxrwxrwx 1 root root 40 Sep 5 2014 /nsr/res/sample.res -> /local/sample-rg/nsr/res/sample.res
</source>
So we have no need to copy it to every node. But we need the link on every node!
d3d863ec6cfc43efc4ac4c95184bf9fd9ed7000c
440
439
2014-09-15T08:20:40Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > /local/${RGname}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/local/${RGname}/nsr/bin/networker_precmd.sh >/local/${RGname}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/local/${RGname}/nsr/bin/networker_pstcmd.sh >/local/${RGname}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "12:00pm";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every node:
<source lang=bash>
# ln -s /local/${RGname}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=/local/${RGname} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
da03fc9179088c2a68402191beb9d2625f7b66b1
441
440
2014-09-15T08:24:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/networker_precmd.sh >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/networker_pstcmd.sh >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "12:00pm";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
05507a8b9a686b3bb615cfddf4f2575161a2e63c
442
441
2014-09-15T08:31:43Z
Lollypop
2
/* Define a resource for Networker */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/networker_precmd.sh >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/networker_pstcmd.sh >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
6a5d3fbd1387727ac83098cca218e9b228c74058
443
442
2014-09-15T10:24:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED - DO NOT USE IT AS-IS!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
ZPOOL="sample_pool"
ZONE="sample-zone"
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
DB="${ORACLE_SID}"
DBUSER="${ORACLE_USER}"
SNAPSHOT_NAME="nsr"
# The leading "echo" turns these into dry runs; remove it for real use
ZFS_CMD="echo /usr/bin/zfs"
ZLOGIN_CMD="echo /usr/bin/zlogin"
# Prefix each piped line with a timestamp
function print_log {
    while read line ; do echo "$(date '+%Y-%m-%d %H:%M:%S') ${line}" ; done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 ] && [ "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
fi
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 ] && [ "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
fi
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} >/dev/null)
then
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
case $1 in
pre)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_pre ${DB} ${DBUSER} ${ZONE}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
;;
*)
usage
;;
esac
</source>
!!!THIS CODE IS UNTESTED - DO NOT USE IT AS-IS!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
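The `basename ${RGname} -rg` construct used above relies on basename's suffix-stripping. A quick check of the derived resource names:

```shell
# basename with a second argument strips that trailing suffix from the name.
RGname=sample-rg
NAME=$(basename ${RGname} -rg)   # "sample-rg" -> "sample"
names="${NAME}-hasp-zfs-res ${NAME}-lh ${NAME}-nsr-res"
echo "$names"
```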
359e6c216fde29598a0e3ce0e01fabb66429ebfa
444
443
2014-09-15T10:26:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
ZPOOL="sample_pool"
ZONE="sample-zone"
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="echo /usr/bin/zfs"
ZLOGIN_CMD="echo /usr/bin/zlogin"
function print_log {
# Prepend a timestamp to every line read from stdin
while IFS= read -r line
do
printf "%s %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${line}"
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 ] && [ "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD=""
fi
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 ] && [ "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD=""
fi
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if ${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} >/dev/null 2>&1
then
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
case $1 in
pre)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_pre ${ORACLE_SID} ${ORACLE_USER} ${ZONE}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_pst ${ORACLE_SID} ${ORACLE_USER} ${ZONE}
;;
pst)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
;;
*)
usage
;;
esac
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
5863251600153901739de1cb9534ed49a01a94c6
454
444
2014-09-15T12:18:23Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
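For NAME=sample these definitions expand to sample-rg, SAMPLE, sample_pool and /local/sample-rg. A quick sanity check of the tr-based uppercasing, runnable in any POSIX shell:

```shell
# Reproduce the variable definitions for NAME=sample and print the results
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
ZPOOL_BASEDIR=/local/${RGname}
echo "${RGname} ${NetworkerGroup} ${ZPOOL} ${ZPOOL_BASEDIR}"
# prints: sample-rg SAMPLE sample_pool /local/sample-rg
```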
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
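For NAME=sample the heredoc above writes /local/sample-rg/nsr/res/SAMPLE.res with all variables expanded, i.e. a savepnpc (save with pre/post commands) client resource:

```
type: savepnpc;
precmd: "/local/sample-rg/nsr/bin/prepst_command.sh pre >/local/sample-rg/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/local/sample-rg/nsr/bin/prepst_command.sh pst >/local/sample-rg/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
```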
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
ZPOOL="sample_pool"
ZONE="sample-zone"
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="echo /usr/bin/zfs"
ZLOGIN_CMD="echo /usr/bin/zlogin"
function print_log {
# Prepend a timestamp to every line read from stdin
while IFS= read -r line
do
printf "%s %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${line}"
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 ] && [ "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD=""
fi
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 ] && [ "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD=""
fi
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if ${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} >/dev/null 2>&1
then
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
case $1 in
pre)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_pre ${ORACLE_SID} ${ORACLE_USER} ${ZONE}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_pst ${ORACLE_SID} ${ORACLE_USER} ${ZONE}
;;
pst)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
;;
*)
usage
;;
esac
</source>
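Note that ZFS_CMD and ZLOGIN_CMD above are deliberately prefixed with echo, so the script only prints the zfs and zlogin command lines instead of executing them (a dry run); remove the echo prefix once you have verified the output. A minimal sketch of the effect:

```shell
# With the echo prefix, the zfs command line is printed, not executed
ZFS_CMD="echo /usr/bin/zfs"
ZPOOL="sample_pool"
SNAPSHOT_NAME="nsr"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
# prints: /usr/bin/zfs snapshot -r sample_pool@nsr
```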
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
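The $(basename ${RGname} -rg) construct used above strips the trailing -rg suffix from the resource-group name, which is what turns sample-rg into the bare name sample:

```shell
# basename removes the given suffix from the last path component
basename sample-rg -rg
# prints: sample
```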
Now we have a client name to which we can connect: sample-lh
a11993c70e901a584159df35271ae8e826cda147
Lolly's Wiki:General disclaimer
4
159
445
2014-09-15T10:41:59Z
Lollypop
2
The page was newly created: „__NOTOC__ <center>''Please note that all textual contents of {{SITENAME}}, unless they're not exclusively marked as [[:en:Public domain|public domain]], are re…“
wikitext
text/x-wiki
__NOTOC__
<center>''Please note that all textual contents of {{SITENAME}}, unless they are explicitly marked as [[:en:Public domain|public domain]], are released under the [[GNU Free Documentation License]] (GFDL).<br>See [[Project:Copyrights]] for more information.''</center><br />
<center><b>{{SITENAME}} M<small>AKES</small> N<small>O</small> G<small>UARANTEE</small> O<small>F</small> V<small>ALIDITY</small></b></center>
'''{{SITENAME}}''' at http://{{SERVER}}/ is the online home of and the first access point for information about the software [[:en:MediaWiki|MediaWiki]]. The community working here is a voluntary association of individuals and groups who are developing an open-content common resource of different media and human technical knowledge. The structure of this site, which is a [[:en:Wiki|wiki]], allows anyone with an Internet connection and World Wide Web browser to alter the content found here. Therefore, please be advised that nothing found here has necessarily been reviewed by professionals with the expertise necessary to provide you with complete, accurate or reliable information.
That is not to say that you will not find valuable and accurate information on MediaWiki.org; much of the time you will. However, MediaWiki.org cannot guarantee the validity of the information found here. The content of any given information may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
=== No formal peer review ===
We are working on ways to select and highlight reliable versions of information articles. Our active community of editors uses tools such as the [[Special:Recentchanges]] and [[Special:Newpages]] feeds to monitor new and changing content. However, MediaWiki.org is not uniformly peer reviewed; while readers may correct errors or engage in casual review, they have no legal duty to do so and thus all information read here is without any implied warranty of fitness for any purpose or use whatsoever. Even articles that have been vetted by informal review may later have been edited inappropriately, just before you view them.
::''None of the authors, contributors, sponsors, administrators, sysops, or anyone else connected with MediaWiki.org in any way whatsoever can be responsible for the appearance of any inaccurate or libelous information or for your use of the information contained in or linked from these web pages.''
=== No contract; limited license ===
Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and the owners or users of this site, the owners of the servers upon which it is housed, the individual MediaWiki.org contributors, any project administrators, sysops or anyone else who is in any way connected with this project or sister projects subject to your claims against them directly. You are being granted a limited license to copy anything from this site; it does not create or imply any contractual or extracontractual liability on the part of MediaWiki or any of its agents, members, organizers or other users.
There is no agreement or understanding between you and MediaWiki.org regarding your use or modification of this information beyond the [[GNU Free Documentation License]] (GFDL); neither is anyone at MediaWiki.org responsible should someone change, edit, modify or remove any information that you may post on MediaWiki.org or any of its associated projects.
=== Trademarks ===
Any of the trademarks, service marks, collective marks, design rights, personality rights or similar rights that are mentioned, used or cited in the articles of MediaWiki.org are the property of their respective owners. Their use here does not imply that you may use them for any other purpose other than for the same or a similar informational use as contemplated by the original authors of these MediaWiki.org articles under the GFDL licensing scheme. Unless otherwise stated MediaWiki.org and Wikimedia sites are neither endorsed nor affiliated with any of the holders of any such rights and as such MediaWiki.org can not grant any rights to use any otherwise protected materials. Your use of any such or similar incorporeal property is at your own risk.
=== Jurisdiction and legality of content ===
Publication of information found on MediaWiki.org may be in violation of the laws of the country or jurisdiction from where you are viewing this information. The MediaWiki database is stored on servers in the [[:en:United States of America|United States of America]], and is maintained in reference to the protections afforded under local and federal law. Laws in your country or jurisdiction may not protect or allow the same kinds of speech or distribution. MediaWiki.org does not encourage the violation of any laws; and cannot be responsible for any violations of such laws, should you link to this domain or use, reproduce, or republish the information contained herein.
Thank you for spending the time to read this page, and please enjoy your experience at MediaWiki.org.
[[Category:General Disclaimer]]
888401a54e21563e738b0636311e1d9dfb458074
446
445
2014-09-15T10:44:35Z
Lollypop
2
wikitext
text/x-wiki
__NOTOC__
<center>''Please note that all textual contents of {{SITENAME}}, unless they are explicitly marked as [[:en:Public domain|public domain]], are released under the [[GNU Free Documentation License]] (GFDL).<br>See [[Project:Copyrights]] for more information.''</center><br />
<center><b>{{SITENAME}} M<small>AKES</small> N<small>O</small> G<small>UARANTEE</small> O<small>F</small> V<small>ALIDITY</small></b></center>
'''{{SITENAME}}''' at {{SERVER}} is the online home of and the first access point for information about the software [[:en:MediaWiki|MediaWiki]]. The community working here is a voluntary association of individuals and groups who are developing an open-content common resource of different media and human technical knowledge. The structure of this site, which is a [[:en:Wiki|wiki]], allows anyone with an Internet connection and World Wide Web browser to alter the content found here. Therefore, please be advised that nothing found here has necessarily been reviewed by professionals with the expertise necessary to provide you with complete, accurate or reliable information.
That is not to say that you will not find valuable and accurate information on MediaWiki.org; much of the time you will. However, MediaWiki.org cannot guarantee the validity of the information found here. The content of any given information may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
=== No formal peer review ===
We are working on ways to select and highlight reliable versions of information articles. Our active community of editors uses tools such as the [[Special:Recentchanges]] and [[Special:Newpages]] feeds to monitor new and changing content. However, MediaWiki.org is not uniformly peer reviewed; while readers may correct errors or engage in casual review, they have no legal duty to do so and thus all information read here is without any implied warranty of fitness for any purpose or use whatsoever. Even articles that have been vetted by informal review may later have been edited inappropriately, just before you view them.
::''None of the authors, contributors, sponsors, administrators, sysops, or anyone else connected with MediaWiki.org in any way whatsoever can be responsible for the appearance of any inaccurate or libelous information or for your use of the information contained in or linked from these web pages.''
=== No contract; limited license ===
Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and the owners or users of this site, the owners of the servers upon which it is housed, the individual MediaWiki.org contributors, any project administrators, sysops or anyone else who is in any way connected with this project or sister projects subject to your claims against them directly. You are being granted a limited license to copy anything from this site; it does not create or imply any contractual or extracontractual liability on the part of MediaWiki or any of its agents, members, organizers or other users.
There is no agreement or understanding between you and MediaWiki.org regarding your use or modification of this information beyond the [[GNU Free Documentation License]] (GFDL); neither is anyone at MediaWiki.org responsible should someone change, edit, modify or remove any information that you may post on MediaWiki.org or any of its associated projects.
=== Trademarks ===
Any of the trademarks, service marks, collective marks, design rights, personality rights or similar rights that are mentioned, used or cited in the articles of MediaWiki.org are the property of their respective owners. Their use here does not imply that you may use them for any other purpose other than for the same or a similar informational use as contemplated by the original authors of these MediaWiki.org articles under the GFDL licensing scheme. Unless otherwise stated MediaWiki.org and Wikimedia sites are neither endorsed nor affiliated with any of the holders of any such rights and as such MediaWiki.org can not grant any rights to use any otherwise protected materials. Your use of any such or similar incorporeal property is at your own risk.
=== Jurisdiction and legality of content ===
Publication of information found on MediaWiki.org may be in violation of the laws of the country or jurisdiction from where you are viewing this information. The MediaWiki database is stored on servers in [[:en:Germany|Germany]], and is maintained in reference to the protections afforded under local and federal law. Laws in your country or jurisdiction may not protect or allow the same kinds of speech or distribution. MediaWiki.org does not encourage the violation of any laws; and cannot be responsible for any violations of such laws, should you link to this domain or use, reproduce, or republish the information contained herein.
Thank you for spending the time to read this page, and please enjoy your experience at {{SITENAME}}.
[[Category:General Disclaimer]]
37ca50e178e2f38cce6c4b2c6a0a5435162b884f
447
446
2014-09-15T10:48:55Z
Lollypop
2
wikitext
text/x-wiki
__NOTOC__
German Disclaimer:
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may be held jointly responsible for the contents of the linked pages. This can only be prevented by explicitly distancing oneself from those contents.
For all links on this homepage: I hereby explicitly distance myself from all contents of all pages linked from my homepage and do not adopt those contents as my own.
International disclaimer:
<center>''Please note that all textual contents of {{SITENAME}}, unless they are explicitly marked as [[:en:Public domain|public domain]], are released under the [[GNU Free Documentation License]] (GFDL).<br>See [[Project:Copyrights]] for more information.''</center><br />
<center><b>{{SITENAME}} M<small>AKES</small> N<small>O</small> G<small>UARANTEE</small> O<small>F</small> V<small>ALIDITY</small></b></center>
'''{{SITENAME}}''' at {{SERVER}} is the online home of and the first access point for information about the software [[:en:MediaWiki|MediaWiki]]. The community working here is a voluntary association of individuals and groups who are developing an open-content common resource of different media and human technical knowledge. The structure of this site, which is a [[:en:Wiki|wiki]], allows anyone with an Internet connection and World Wide Web browser to alter the content found here. Therefore, please be advised that nothing found here has necessarily been reviewed by professionals with the expertise necessary to provide you with complete, accurate or reliable information.
That is not to say that you will not find valuable and accurate information on MediaWiki.org; much of the time you will. However, MediaWiki.org cannot guarantee the validity of the information found here. The content of any given information may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
=== No formal peer review ===
We are working on ways to select and highlight reliable versions of information articles. Our active community of editors uses tools such as the [[Special:Recentchanges]] and [[Special:Newpages]] feeds to monitor new and changing content. However, MediaWiki.org is not uniformly peer reviewed; while readers may correct errors or engage in casual review, they have no legal duty to do so and thus all information read here is without any implied warranty of fitness for any purpose or use whatsoever. Even articles that have been vetted by informal review may later have been edited inappropriately, just before you view them.
::''None of the authors, contributors, sponsors, administrators, sysops, or anyone else connected with MediaWiki.org in any way whatsoever can be responsible for the appearance of any inaccurate or libelous information or for your use of the information contained in or linked from these web pages.''
=== No contract; limited license ===
Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and the owners or users of this site, the owners of the servers upon which it is housed, the individual MediaWiki.org contributors, any project administrators, sysops or anyone else who is in any way connected with this project or sister projects subject to your claims against them directly. You are being granted a limited license to copy anything from this site; it does not create or imply any contractual or extracontractual liability on the part of MediaWiki or any of its agents, members, organizers or other users.
There is no agreement or understanding between you and MediaWiki.org regarding your use or modification of this information beyond the [[GNU Free Documentation License]] (GFDL); neither is anyone at MediaWiki.org responsible should someone change, edit, modify or remove any information that you may post on MediaWiki.org or any of its associated projects.
=== Trademarks ===
Any of the trademarks, service marks, collective marks, design rights, personality rights or similar rights that are mentioned, used or cited in the articles of MediaWiki.org are the property of their respective owners. Their use here does not imply that you may use them for any other purpose other than for the same or a similar informational use as contemplated by the original authors of these MediaWiki.org articles under the GFDL licensing scheme. Unless otherwise stated MediaWiki.org and Wikimedia sites are neither endorsed nor affiliated with any of the holders of any such rights and as such MediaWiki.org can not grant any rights to use any otherwise protected materials. Your use of any such or similar incorporeal property is at your own risk.
=== Jurisdiction and legality of content ===
Publication of information found on MediaWiki.org may be in violation of the laws of the country or jurisdiction from where you are viewing this information. The MediaWiki database is stored on servers in [[:en:Germany|Germany]], and is maintained in reference to the protections afforded under local and federal law. Laws in your country or jurisdiction may not protect or allow the same kinds of speech or distribution. MediaWiki.org does not encourage the violation of any laws; and cannot be responsible for any violations of such laws, should you link to this domain or use, reproduce, or republish the information contained herein.
Thank you for spending the time to read this page, and please enjoy your experience at {{SITENAME}}.
[[Category:General Disclaimer]]
43e2637b861c11ff5c069021bb813fc7ec940663
450
447
2014-09-15T10:54:11Z
Lollypop
2
wikitext
text/x-wiki
__NOTOC__
=German Disclaimer=
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may be held jointly responsible for the contents of the linked pages. This can only be prevented by explicitly distancing oneself from those contents.
For all links on this homepage: I hereby explicitly distance myself from all contents of all pages linked from my homepage and do not adopt those contents as my own.
=International disclaimer=
<center>''Please note that all textual contents of {{SITENAME}}, unless they are explicitly marked as [[:en:Public domain|public domain]], are released under the [[GNU Free Documentation License]] (GFDL).<br>See [[Project:Copyrights]] for more information.''</center><br />
<center><b>{{SITENAME}} M<small>AKES</small> N<small>O</small> G<small>UARANTEE</small> O<small>F</small> V<small>ALIDITY</small></b></center>
'''{{SITENAME}}''' at {{SERVER}} is the online home of and the first access point for information about the software [[:en:MediaWiki|MediaWiki]]. The community working here is a voluntary association of individuals and groups who are developing an open-content common resource of different media and human technical knowledge. The structure of this site, which is a [[:en:Wiki|wiki]], allows anyone with an Internet connection and World Wide Web browser to alter the content found here. Therefore, please be advised that nothing found here has necessarily been reviewed by professionals with the expertise necessary to provide you with complete, accurate or reliable information.
That is not to say that you will not find valuable and accurate information on MediaWiki.org; much of the time you will. However, MediaWiki.org cannot guarantee the validity of the information found here. The content of any given information may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
=== No formal peer review ===
We are working on ways to select and highlight reliable versions of information articles. Our active community of editors uses tools such as the [[Special:Recentchanges]] and [[Special:Newpages]] feeds to monitor new and changing content. However, MediaWiki.org is not uniformly peer reviewed; while readers may correct errors or engage in casual review, they have no legal duty to do so and thus all information read here is without any implied warranty of fitness for any purpose or use whatsoever. Even articles that have been vetted by informal review may later have been edited inappropriately, just before you view them.
::''None of the authors, contributors, sponsors, administrators, sysops, or anyone else connected with MediaWiki.org in any way whatsoever can be responsible for the appearance of any inaccurate or libelous information or for your use of the information contained in or linked from these web pages.''
=== No contract; limited license ===
Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and the owners or users of this site, the owners of the servers upon which it is housed, the individual MediaWiki.org contributors, any project administrators, sysops or anyone else who is in any way connected with this project or sister projects subject to your claims against them directly. You are being granted a limited license to copy anything from this site; it does not create or imply any contractual or extracontractual liability on the part of MediaWiki or any of its agents, members, organizers or other users.
There is no agreement or understanding between you and MediaWiki.org regarding your use or modification of this information beyond the [[GNU Free Documentation License]] (GFDL); neither is anyone at MediaWiki.org responsible should someone change, edit, modify or remove any information that you may post on MediaWiki.org or any of its associated projects.
=== Trademarks ===
Any of the trademarks, service marks, collective marks, design rights, personality rights or similar rights that are mentioned, used or cited in the articles of MediaWiki.org are the property of their respective owners. Their use here does not imply that you may use them for any other purpose other than for the same or a similar informational use as contemplated by the original authors of these MediaWiki.org articles under the GFDL licensing scheme. Unless otherwise stated MediaWiki.org and Wikimedia sites are neither endorsed nor affiliated with any of the holders of any such rights and as such MediaWiki.org can not grant any rights to use any otherwise protected materials. Your use of any such or similar incorporeal property is at your own risk.
=== Jurisdiction and legality of content ===
Publication of information found on MediaWiki.org may be in violation of the laws of the country or jurisdiction from where you are viewing this information. The MediaWiki database is stored on servers in [[:en:Germany|Germany]], and is maintained in reference to the protections afforded under local and federal law. Laws in your country or jurisdiction may not protect or allow the same kinds of speech or distribution. MediaWiki.org does not encourage the violation of any laws; and cannot be responsible for any violations of such laws, should you link to this domain or use, reproduce, or republish the information contained herein.
Thank you for spending the time to read this page, and please enjoy your experience at {{SITENAME}}.
[[Category:General Disclaimer]]
a67b2203806d4251a96b106e396ca54fbb192ba3
451
450
2014-09-15T10:57:31Z
Lollypop
2
wikitext
text/x-wiki
__NOTOC__
=German Disclaimer=
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may be held jointly responsible for the contents of the linked pages. This can only be prevented by explicitly distancing oneself from those contents.
For all links on this homepage: I hereby explicitly distance myself from all contents of all pages linked from my homepage and do not adopt those contents as my own.
=International disclaimer=
<center>''Please note that all textual contents of {{SITENAME}}, unless they are explicitly marked as [[:en:Public domain|public domain]], are released under the [[GNU Free Documentation License]] (GFDL).<br>See [[Project:Copyrights]] for more information.''</center><br />
<center><b>{{SITENAME}} M<small>AKES</small> N<small>O</small> G<small>UARANTEE</small> O<small>F</small> V<small>ALIDITY</small></b></center>
'''{{SITENAME}}''' at {{SERVER}} is the online home of and the first access point for information about the software [[:en:MediaWiki|MediaWiki]]. The community working here is a voluntary association of individuals and groups who are developing an open-content common resource of different media and human technical knowledge. The structure of this site, which is a [[:en:Wiki|wiki]], allows anyone with an Internet connection and World Wide Web browser to alter the content found here. Therefore, please be advised that nothing found here has necessarily been reviewed by professionals with the expertise necessary to provide you with complete, accurate or reliable information.
That is not to say that you will not find valuable and accurate information on MediaWiki.org; much of the time you will. However, {{SITENAME}} cannot guarantee the validity of the information found here. The content of any given information may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
=== No formal peer review ===
We are working on ways to select and highlight reliable versions of information articles. Our active community of editors uses tools such as the [[Special:Recentchanges]] and [[Special:Newpages]] feeds to monitor new and changing content. However, MediaWiki.org is not uniformly peer reviewed; while readers may correct errors or engage in casual review, they have no legal duty to do so and thus all information read here is without any implied warranty of fitness for any purpose or use whatsoever. Even articles that have been vetted by informal review may later have been edited inappropriately, just before you view them.
::''None of the authors, contributors, sponsors, administrators, sysops, or anyone else connected with {{SITENAME}} in any way whatsoever can be responsible for the appearance of any inaccurate or libelous information or for your use of the information contained in or linked from these web pages.''
=== No contract; limited license ===
Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and the owners or users of this site, the owners of the servers upon which it is housed, the individual MediaWiki.org contributors, any project administrators, sysops or anyone else who is in any way connected with this project or sister projects subject to your claims against them directly. You are being granted a limited license to copy anything from this site; it does not create or imply any contractual or extracontractual liability on the part of MediaWiki or any of its agents, members, organizers or other users.
There is no agreement or understanding between you and {{SITENAME}} regarding your use or modification of this information beyond the [[GNU Free Documentation License]] (GFDL); neither is anyone at {{SITENAME}} responsible should someone change, edit, modify or remove any information that you may post on {{SITENAME}} or any of its associated projects.
=== Trademarks ===
Any of the trademarks, service marks, collective marks, design rights, personality rights or similar rights that are mentioned, used or cited in the articles of MediaWiki.org are the property of their respective owners. Their use here does not imply that you may use them for any purpose other than the same or a similar informational use as contemplated by the original authors of these MediaWiki.org articles under the GFDL licensing scheme. Unless otherwise stated, MediaWiki.org and Wikimedia sites are neither endorsed by nor affiliated with any of the holders of any such rights, and as such {{SITENAME}} cannot grant any rights to use any otherwise protected materials. Your use of any such or similar incorporeal property is at your own risk.
=== Jurisdiction and legality of content ===
Publication of information found on {{SITENAME}} may be in violation of the laws of the country or jurisdiction from where you are viewing this information. The {{SITENAME}} database is stored on servers in Germany, and is maintained in reference to the protections afforded under local and federal law. Laws in your country or jurisdiction may not protect or allow the same kinds of speech or distribution. MediaWiki.org does not encourage the violation of any laws; and cannot be responsible for any violations of such laws, should you link to this domain or use, reproduce, or republish the information contained herein.
Thank you for spending the time to read this page, and please enjoy your experience at {{SITENAME}}.
[[Category:General Disclaimer]]
eabee73aaecd3ea405ad63fc4d96b8b258bf0749
452
451
2014-09-15T10:59:10Z
Lollypop
2
wikitext
text/x-wiki
__NOTOC__
=German Disclaimer=
In its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg regional court) decided that by placing a link you may share responsibility for the contents of the linked page. This can only be prevented by expressly distancing yourself from those contents.
For all links on this homepage: I hereby expressly distance myself from all contents of all pages linked from my homepage and do not adopt those contents as my own.
=International disclaimer=
<center>''Please note that all textual contents of {{SITENAME}}, unless they are explicitly marked as [[:en:Public domain|public domain]], are released under the [[GNU Free Documentation License]] (GFDL).<br>See [[Project:Copyrights]] for more information.''</center><br />
<center><b>{{SITENAME}} M<small>AKES</small> N<small>O</small> G<small>UARANTEE</small> O<small>F</small> V<small>ALIDITY</small></b></center>
'''{{SITENAME}}''' at {{SERVER}} is the online home of and the first access point for information about the software [[:en:MediaWiki|MediaWiki]]. The community working here is a voluntary association of individuals and groups who are developing an open-content common resource of different media and human technical knowledge. The structure of this site, which is a [[:en:Wiki|wiki]], allows anyone with an Internet connection and World Wide Web browser to alter the content found here. Therefore, please be advised that nothing found here has necessarily been reviewed by professionals with the expertise necessary to provide you with complete, accurate or reliable information.
That is not to say that you will not find valuable and accurate information on {{SITENAME}}; much of the time you will. However, {{SITENAME}} cannot guarantee the validity of the information found here. The content of any given information may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
=== No formal peer review ===
We are working on ways to select and highlight reliable versions of information articles. Our active community of editors uses tools such as the [[Special:Recentchanges]] and [[Special:Newpages]] feeds to monitor new and changing content. However, {{SITENAME}} is not uniformly peer reviewed; while readers may correct errors or engage in casual review, they have no legal duty to do so and thus all information read here is without any implied warranty of fitness for any purpose or use whatsoever. Even articles that have been vetted by informal review may later have been edited inappropriately, just before you view them.
::''None of the authors, contributors, sponsors, administrators, sysops, or anyone else connected with {{SITENAME}} in any way whatsoever can be responsible for the appearance of any inaccurate or libelous information or for your use of the information contained in or linked from these web pages.''
=== No contract; limited license ===
Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and the owners or users of this site, the owners of the servers upon which it is housed, the individual {{SITENAME}} contributors, any project administrators, sysops or anyone else who is in any way connected with this project or sister projects subject to your claims against them directly. You are being granted a limited license to copy anything from this site; it does not create or imply any contractual or extracontractual liability on the part of {{SITENAME}} or any of its agents, members, organizers or other users.
There is no agreement or understanding between you and {{SITENAME}} regarding your use or modification of this information beyond the [[GNU Free Documentation License]] (GFDL); neither is anyone at {{SITENAME}} responsible should someone change, edit, modify or remove any information that you may post on {{SITENAME}} or any of its associated projects.
=== Trademarks ===
Any of the trademarks, service marks, collective marks, design rights, personality rights or similar rights that are mentioned, used or cited in the articles of {{SITENAME}} are the property of their respective owners. Their use here does not imply that you may use them for any purpose other than the same or a similar informational use as contemplated by the original authors of these {{SITENAME}} articles under the GFDL licensing scheme. Unless otherwise stated, {{SITENAME}} and Wikimedia sites are neither endorsed by nor affiliated with any of the holders of any such rights, and as such {{SITENAME}} cannot grant any rights to use any otherwise protected materials. Your use of any such or similar incorporeal property is at your own risk.
=== Jurisdiction and legality of content ===
Publication of information found on {{SITENAME}} may be in violation of the laws of the country or jurisdiction from where you are viewing this information. The {{SITENAME}} database is stored on servers in Germany, and is maintained in reference to the protections afforded under local and federal law. Laws in your country or jurisdiction may not protect or allow the same kinds of speech or distribution. {{SITENAME}} does not encourage the violation of any laws; and cannot be responsible for any violations of such laws, should you link to this domain or use, reproduce, or republish the information contained herein.
Thank you for spending the time to read this page, and please enjoy your experience at {{SITENAME}}.
[[Category:General Disclaimer]]
4fcfcf68c91ffa1010990cde3e931e43d7752664
Hauptseite
0
1
448
157
2014-09-15T10:52:14Z
Lollypop
2
wikitext
text/x-wiki
First of all, please read my [[Project:General_disclaimer|disclaimer]]!
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list for new MediaWiki releases]
f8d799c0741cb873d0b2e682865762be40c0e406
453
448
2014-09-15T11:00:13Z
Lollypop
2
wikitext
text/x-wiki
First of all, please read my [[Project:General_disclaimer|disclaimer]]!
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list for new MediaWiki releases]
In its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg regional court) decided that by placing a link you may share responsibility for the contents of the linked page. This can only be prevented by expressly distancing yourself from those contents. For all links on this homepage: I hereby expressly distance myself from all contents of all pages linked from my homepage and do not adopt those contents as my own.
b36811f71b21ae88396807ecf4e33d4e6680f303
Lolly's Wiki:Über Lolly's Wiki
4
160
449
2014-09-15T10:53:37Z
Lollypop
2
Die Seite wurde neu angelegt: „First of all please read my [[Project:General_disclaimer|disclaimer]]! Bitte zuerst bitte meinen [[Project:General_disclaimer|Haftungsausshluss]] lesen!“
wikitext
text/x-wiki
First of all, please read my [[Project:General_disclaimer|disclaimer]]!
12991da973158e68378410cddc46fb4d47cf6968
Fibrechannel Analyse
0
139
457
456
2014-09-19T06:44:37Z
Lollypop
2
/* Now the script on the backup host */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards are plugged into slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??> (the first byte is the switch domain ID, the second the switch port; the last byte is presumably the AL_PA, which is 00 for fabric-attached ports)
So there are evidently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached to the switch with ID 1 together with 2 storage systems, and there is a link to a switch with ID 3 to which 2 further storage systems are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths through this single host port.
Hence 4 paths later on (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
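The Port_ID split described above can be automated with a small shell helper. This is a hypothetical sketch (`decode_portid` is not an existing tool); it assumes the standard 24-bit FC address layout <domain><port><AL_PA>:
<source lang=bash>
# decode_portid: split a luxadm Port_ID into switch domain, switch port
# and AL_PA. luxadm drops leading zeros, so pad to six hex digits first.
decode_portid() {
    local id="$1"
    while [ "${#id}" -lt 6 ]; do id="0${id}"; done
    printf 'domain=%s port=%s alpa=%s\n' "${id:0:2}" "${id:2:2}" "${id:4:2}"
}

decode_portid 30200    # -> domain=03 port=02 alpa=00
decode_portid 10800    # -> domain=01 port=08 alpa=00 (our own HBA)
</source>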
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
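To inspect every device that luxadm probe finds, the logical paths can be extracted from its output. A sketch (`logical_paths` is a hypothetical helper name):
<source lang=bash>
# logical_paths: extract the "Logical Path:" entries from luxadm probe
# output read on stdin.
logical_paths() {
    awk -F: '/Logical Path:/ { print $2 }'
}

# Display details for every probed device:
# luxadm probe | logical_paths | while read -r p; do luxadm display "$p"; done
</source>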
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough indication of the firmware level (firmware version: 07.84.47.10);
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when you work with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path known from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access), the device tells the host which paths it should primarily use to access the LUN.
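The controller-to-FC-port lookup can be done for all controllers in one pass. A sketch (`list_fc_controllers` is a hypothetical helper; it assumes the /dev/cfg symlink layout shown above):
<source lang=bash>
# list_fc_controllers: print each cN controller in a directory together
# with the fp@ hardware path its symlink points to.
list_fc_controllers() {
    local dir="${1:-/dev/cfg}" link target
    for link in "$dir"/c*; do
        [ -L "$link" ] || continue
        target=$(readlink "$link")
        case "$target" in
            *fp@*) printf '%s -> %s\n' "${link##*/}" "$target" ;;
        esac
    done
}

# list_fc_controllers          # inspects /dev/cfg by default
</source>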
==fcinfo==
===fcinfo hba-port===
Prints some information about the manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
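Counters like the Loss of Sync Count of 261 above stand out more easily when the linkstat output is filtered automatically. A sketch (`check_sync_losses` is a hypothetical helper; the threshold of 10 is arbitrary):
<source lang=bash>
# check_sync_losses: read "fcinfo remote-port ... --linkstat" output on
# stdin and report remote ports with more than 10 Loss of Sync events.
check_sync_losses() {
    awk '
        /Remote Port WWN:/    { port = $4 }
        /Loss of Sync Count:/ { if ($5 + 0 > 10) printf "%s: %s sync losses\n", port, $5 }
    '
}

# fcinfo remote-port --port 2100001b32904445 --linkstat | check_sync_losses
</source>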
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
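The two steps above can be combined: extract every unusable Ap_Id (without the LUN suffix) and unconfigure it. A sketch (`unusable_apids` is a hypothetical helper; the echo keeps the loop a dry run until the list has been reviewed):
<source lang=bash>
# unusable_apids: reduce "cfgadm -al -o show_FCP_dev" output on stdin
# to the unique Ap_Ids (LUN suffix stripped) flagged as unusable.
unusable_apids() {
    awk '/unusable/ { sub(/,[0-9]+$/, "", $1); print $1 }' | sort -u
}

# Dry run; remove the echo to actually unconfigure:
# cfgadm -al -o show_FCP_dev | unusable_apids |
#     while read -r apid; do
#         echo cfgadm -c unconfigure -o unusable_SCSI_LUN "$apid"
#     done
</source>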
===cfgadm -o force_update -c configure <controller>===
Rescans the LUNs. Be careful: this performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the principal switch of the fabric "Fabric1" (all others are "subordinate") (switchRole:, zoning:)
# Zoning is enabled on the switch (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# 7 ports are fitted with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
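The port states tallied above can also be counted automatically from a saved switchshow listing. A sketch (`port_state_summary` is a hypothetical helper; it keys on the State column of the port table):
<source lang=bash>
# port_state_summary: count switchshow port states (Online, No_Light,
# No_Module, ...) from the port table read on stdin.
port_state_summary() {
    awk '$1 ~ /^[0-9]+$/ && NF >= 6 { count[$6]++ }
         END { for (s in count) print s, count[s] }' | sort
}

# ssh admin@san-sw_11 switchshow | port_state_summary
</source>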
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
be177514f334a987b694d88d23e01f5f15e63e01
476
457
2014-09-24T11:12:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards are plugged into slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><AL_PA>
So there are evidently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we sit, together with 2 storage systems, on the switch with ID 1 and have a link to a switch with ID 3 to which 2 further storage systems are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same node WWN)
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths over our single host port.
Hence the 4 paths (2 per host port) later in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
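The Port_ID column follows the standard 24-bit Fibre Channel address layout (domain, area, AL_PA). A small sketch to split such an ID (the `decode_portid` helper is made up, not part of luxadm):

```shell
# Decode a 24-bit FC port ID (as printed by "luxadm -e dump_map")
# into Domain (switch ID), Area (switch port), and AL_PA.
decode_portid() {
  local id
  # Zero-pad to 6 hex digits, then slice into three bytes.
  id=$(printf '%06x' "0x$1")
  printf 'Port_ID %s -> Domain 0x%s Area 0x%s AL_PA 0x%s\n' \
    "$1" "${id:0:2}" "${id:2:2}" "${id:4:2}"
}

decode_portid 30200   # switch 3, port 2
decode_portid 10800   # switch 1, port 8 (our own HBA)
```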
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough hint at the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to the FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Here you see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
Lists the remote ports visible through an HBA port, including their link error statistics:
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
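With eight remote ports the statistics get long. A sketch that condenses them to one line per port (the `linkstat_summary` name is an invention; it assumes the exact output format shown above):

```shell
# One line per remote port with its failure counters; reads
# "fcinfo remote-port ... --linkstat" output on stdin.
linkstat_summary() {
  awk '/Remote Port WWN:/    { wwn = $4 }
       /Link Failure Count:/ { lf = $4 }
       /Loss of Sync Count:/ { printf "%s failures=%s sync_losses=%s\n", wwn, lf, $5 }'
}

# Usage: fcinfo remote-port --port 2100001b32904445 --linkstat | linkstat_summary
```

A port like 201700a0b86e103c with 261 sync losses then stands out immediately.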
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
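The two commands above can be combined to clean up every attachment point with unusable LUNs in one go (a sketch; `unusable_aps` is a hypothetical helper that reads the cfgadm output on stdin):

```shell
# Extract the unique attachment points (controller::port-WWN, without
# the LUN number) of all lines flagged "unusable".
unusable_aps() {
  awk '/unusable/ { sub(/,[0-9]+.*/, "", $1); print $1 }' | sort -u
}

# Then, for example:
# cfgadm -al -o show_FCP_dev | unusable_aps | while read ap ; do
#   cfgadm -c unconfigure -o unusable_SCSI_LUN "$ap"
# done
```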
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this triggers a forcelip (forced loop initialization)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01" (switchId:)
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 7 ports carry SFPs but are unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
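The port states can be tallied with a small filter (a sketch; `port_state_summary` is a made-up helper reading saved `switchshow` output on stdin):

```shell
# Tally the State column of switchshow's port table; data rows start
# with a numeric Index, so headers and the banner line are skipped.
port_state_summary() {
  awk '$1 ~ /^[0-9]+$/ { count[$6]++ } END { for (s in count) print s, count[s] }'
}

# Usage: switchshow | port_state_summary
```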
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to the backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
# Note: asort() and length() on an array are gawk extensions not available in nawk.
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
print "Create config:";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
1958e9eef5625547ea390394972b3b843ad8e5e6
478
477
2014-09-24T12:21:36Z
Lollypop
2
/* Script to parse a configupload file */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports together with their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><ALPA>
So there are obviously two switches in the fabric reachable through port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So together with two storage arrays we sit on the switch with ID 1, which has a link to a switch with ID 3 to which two more storage arrays are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see two port WWNs here, i.e. two paths through this single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage array
Host Bus Adapter: FC card
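The Port_ID column can also be decoded programmatically. A minimal bash sketch, assuming the printed value is the 24-bit FC address in hex with leading zeros stripped (<domain><area/port><ALPA>, two hex digits each):
<source lang=bash>
# Decode a dump_map Port_ID into domain (switch), port and ALPA.
decode_port_id() {
    id=$1
    # zero-pad to the full 6 hex digits, then slice byte-wise
    while [ ${#id} -lt 6 ] ; do id="0${id}" ; done
    echo "domain=$((16#${id:0:2})) port=$((16#${id:2:2})) alpa=$((16#${id:4:2}))"
}
decode_port_id 30200    # domain=3 port=2 alpa=0
decode_port_id 10800    # domain=1 port=8 alpa=0
</source>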
===luxadm probe===
Lists all detected Fibre Channel devices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough hint at the firmware level (firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After that, one block per path to this device follows, consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to the FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
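With many remote ports this output gets long; it can be condensed to one line per port. A small sketch (field positions taken from the output above; feed it the saved output of <code>fcinfo remote-port --port <HBA Port WWN> --linkstat</code>):
<source lang=bash>
# Condense linkstat output: one line per remote port with the
# two most interesting error counters.
summarize_linkstat() {
    awk '/Remote Port WWN:/    {wwn=$NF}
         /Link Failure Count:/ {failures=$NF}
         /Loss of Sync Count:/ {print wwn": link_failures="failures" sync_loss="$NF}'
}
# Usage:
# fcinfo remote-port --port 2100001b32904445 --linkstat | summarize_linkstat
</source>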
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
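With many stale LUNs it is tedious to type this per device. A sketch that derives the unique Ap_Ids from the show_FCP_dev listing (the ,<LUN> suffix is stripped, since the unconfigure command takes the controller::WWN part):
<source lang=bash>
# Extract the unique Ap_Ids of all unusable FCP devices.
unusable_apids() {
    awk '/unusable/ {sub(/,[0-9]+$/, "", $1); print $1}' | sort -u
}
# Dry run -- remove the "echo" once the list looks right:
# cfgadm -al -o show_FCP_dev | unusable_apids | while read apid ; do
#     echo cfgadm -c unconfigure -o unusable_SCSI_LUN "$apid"
# done
</source>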
===cfgadm -o force_update -c configure <controller>===
Rescans the LUNs. Be careful! This does a forced LIP (forcelip)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 6 ports are populated with SFPs but unused (0,4,11-15)
# 8 ports have no license and no SFP either (No_Module)
# 9 ports are in use
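The port counts above can also be computed rather than eyeballed. A sketch that tallies the State column from a saved switchshow listing (column positions assumed as in the output above):
<source lang=bash>
# Count switchshow port states (Online, No_Light, No_Module, ...).
port_state_summary() {
    awk '$1 ~ /^[0-9]+$/ && $2 ~ /^[0-9]+$/ {count[$6]++}
         END {for (s in count) print s, count[s]}' | sort
}
# Usage: run switchshow on the switch, save it, then:
# port_state_summary < switchshow.txt
</source>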
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate an SSH key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
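To run this unattended, a crontab entry on the backup host plus a simple retention rule will do. A sketch; the schedule, log path and the 30-day window are assumptions to adjust locally:
<source lang=bash>
# crontab -e (backup host): back up nightly at 02:00
0 2 * * * /opt/bin/backup_brocade_config >>/var/log/brocade_backup.log 2>&1
# optional cleanup: drop config backups older than 30 days
5 3 * * * find ~backupuser/brocade_backup -name '*_config_*.txt' -mtime +30 -exec rm {} \;
</source>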
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/nawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
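The vendor lookup in the script slices the OUI out of each WWN: for NAA type 5 WWNs the OUI starts right after the first hex digit, otherwise after the fourth. The same rule as a standalone bash sketch (mirrors the <code>substr()</code> logic above):
<source lang=bash>
# Extract the vendor OUI from a WWN, with or without colons.
wwn_oui() {
    wwn=$(echo "$1" | tr -d ':')
    case $wwn in
        5*) echo "${wwn:1:6}" ;;   # NAA 5: OUI in hex digits 2-7
        *)  echo "${wwn:4:6}" ;;   # NAA 1/2: OUI in hex digits 5-10
    esac
}
wwn_oui 21:00:00:1b:32:90:2d:45    # 001b32 (Qlogic)
wwn_oui 50:00:0c:91:23:45:67:89    # 0000c9 (Emulex)
</source>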
074e587c7a41afe534986eeb03f05a5a86295f53
File:Archimandrita tesselata IMG 2891.JPG
6
161
458
2014-09-21T21:20:36Z
Lollypop
2
Archimandrita tesselata on a piece of cucumber
wikitext
text/x-wiki
Archimandrita tesselata on a piece of cucumber
3548cb971b3147c8fe5f43fadb121163f55bbb75
File:Archimandrita tesselata R0015551.png
6
162
459
2014-09-21T21:40:01Z
Lollypop
2
Archimandrita tesselata: can these eyes lie?
wikitext
text/x-wiki
Archimandrita tesselata: can these eyes lie?
694fb5c12b11ae18af4916660734a4a62559a304
Archimandrita tesselata
0
148
460
415
2014-09-21T21:40:52Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
e7e3cafaddb644ef3c5894d96198f6e881b46e7f
461
460
2014-09-21T21:41:26Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
6fcbb789d9c741d9f7e415acfec0f36b31348152
462
461
2014-09-21T21:42:05Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
c557a241984f850e04b77d6a6156d873cbdffc4d
465
462
2014-09-21T21:58:41Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="packed-hover">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
06adfede6fe079e22032762767328f04fecebf2c
468
465
2014-09-21T22:03:57Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="packed-hover" caption="Molting of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
a821d19e1fea72ec18176ce7407597b320afc422
471
468
2014-09-21T22:20:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="random" transition="fade" refresh="10000">
<div>[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|128px|Caption 1]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|128px|Caption 2]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|128px|Caption 3]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|128px|Caption 4]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|128px|Caption 5]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|128px|Caption 6]]</div>
</slideshow>
328a09b299f32462c4b92bb0f71aae1729497e58
472
471
2014-09-21T22:34:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="random" transition="fade" refresh="10000">
<div>[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|128px|Caption 1]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|128px|Caption 2]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|128px|Caption 3]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|128px|Caption 4]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|128px|Caption 5]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|128px|Caption 6]]</div>
</slideshow>
<slideshow sequence="random" transition="fade" refresh="10000">
<div>[[Image:Archimandrita tesselata IMG 2891.JPG|thumb|right|128px|Caption 1]]</div>
<div>[[Image:Archimandrita tesselata IMG 2891.JPG|thumb|right|128px|Caption 2]]</div>
<div>[[Image:Archimandrita tesselata IMG 2891.JPG|thumb|right|128px|Caption 3]]</div>
</slideshow>
7297e01a37e7d3e649643f6442449afb3abf7621
473
472
2014-09-21T22:35:39Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita Tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="random" transition="fade" refresh="10000">
<div>[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|128px|Caption 1]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|128px|Caption 2]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|128px|Caption 3]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|128px|Caption 4]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|128px|Caption 5]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|128px|Caption 6]]</div>
</slideshow>
328a09b299f32462c4b92bb0f71aae1729497e58
474
473
2014-09-21T22:40:20Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large, usually rather shy cockroach species.
It owes its name "Pfefferschabe" (peppered roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of an Archimandrita tesselata male">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
<div>[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]</div>
</slideshow>
c6e6e992e1031b16851b476a8c94652b97be7eb4
502
474
2014-09-26T12:48:54Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata is a large, usually rather shy cockroach species.
It owes its name "Pfefferschabe" (peppered roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of an Archimandrita tesselata male">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
<div>[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]</div>
<div>[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]</div>
</slideshow>
83faebe631c9e1575fcad28371b6ea86797a240e
File:Archimandrita tesselata IMG 2901.png
6
163
463
2014-09-21T21:51:36Z
Lollypop
2
Häutung Teil1
wikitext
text/x-wiki
Molting part 1
7e91f3060959c41f6c14202941f15c534cf662d0
File:Archimandrita tesselata IMG 2902.png
6
164
464
2014-09-21T21:52:49Z
Lollypop
2
Häutung Teil2
wikitext
text/x-wiki
Molting part 2
139608d204b4d1cdf1dd16747f6b1c46fd1f0a88
File:Archimandrita tesselata IMG 2903.png
6
165
466
2014-09-21T21:59:28Z
Lollypop
2
Häutung Teil 3
wikitext
text/x-wiki
Molting part 3
68e96d78ce899c2d30f806eb4f5ab75d068fdcbb
File:Archimandrita tesselata IMG 2904.png
6
166
467
2014-09-21T22:00:21Z
Lollypop
2
Häutung Teil4
wikitext
text/x-wiki
Molting part 4
f2b175d9c5ee3ed21e76fda9ff4826fbb297b233
File:Archimandrita tesselata IMG 2905.png
6
167
469
2014-09-21T22:06:06Z
Lollypop
2
Häutung Teil5
wikitext
text/x-wiki
Molting part 5
f534d7d5d4d4424f5e1067d68c4069303b268cea
File:Archimandrita tesselata IMG 2906.png
6
168
470
2014-09-21T22:09:55Z
Lollypop
2
Häutung Teil6
wikitext
text/x-wiki
Molting part 6
5201e0b3f6bdcacd98454f2fba30bc107259b7a2
ZFS Networker
0
158
475
454
2014-09-23T15:37:36Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
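A quick sanity check of how these assignments expand for the example name `sample` (the values shown are derived only from the naming scheme above):

```shell
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
ZPOOL_BASEDIR=/local/${RGname}

echo ${NetworkerGroup}   # prints "SAMPLE"
echo ${ZPOOL}            # prints "sample_pool"
echo ${ZPOOL_BASEDIR}    # prints "/local/sample-rg"
```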
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
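On a cluster with more than one node the link has to exist everywhere. A dry-run sketch that only prints the command per node (the node names `node1`/`node2` are hypothetical; on a real cluster they would come from `/usr/cluster/bin/clnode list`, and the command would be run via ssh or on each node's console):

```shell
NetworkerGroup=SAMPLE
ZPOOL_BASEDIR=/local/sample-rg
# Dry run: print the ln invocation for each node instead of executing it.
for node in node1 node2; do
    echo "${node}: ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res"
done
```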
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
Still not working...
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s : %s\n" "$(date '+%Y%m%d %H:%M:%S')" "$*" >> ${LOGFILE}
else
printf "%s : " "$(date '+%Y%m%d %H:%M:%S')" >> ${LOGFILE}
cat >> ${LOGFILE}
fi
}
# Get commandline from parent pid
savepnpc_pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
savepnpc_commandline="$(pargs -e ${savepnpc_pid} | head -1)"
CLIENT_NAME=$(print_option -m ${savepnpc_commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${savepnpc_commandline}"
exec >>${LOGFILE} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}/sczbt_config"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config}/sczbt_config)
print_log ${LOGFILE} "Zone from ${sczbt_config}/sczbt_config is ${ZONE}"
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log ${LOGFILE}
#for zfs in $(zfs list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
#do
# ${ZFS_CMD} clone -o readonly=on ${zfs} ${zfs/@*/}/nsr_backup
# /usr/sbin/savepnpc -s hhlokens01.srv.ndr-net.de -g NdrCms -LL -m ndrcmstest -cl /local/ndrcmstest-rg/home/nsr_backup
#done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} >/dev/null 2>&1)
then
#for zfs in $(zfs list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
#do
# ${ZFS_CMD} unmount ${zfs/@*/}/nsr_backup
# ${ZFS_CMD} destroy ${zfs/@*/}/nsr_backup
#done
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME} 2>&1 | print_log ${LOGFILE}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
case $1 in
pre)
snapshot_destroy ${ZPOOLS} ${SNAPSHOT_NAME}
snapshot_pre ${DB} ${DBUSER} ${ZONE}
snapshot_create ${ZPOOLS} ${SNAPSHOT_NAME}
snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
;;
*)
usage
;;
esac
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
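The `print_option` helper used in the script can be exercised standalone: it scans a word list for an option flag and prints the word following it. The sample command line below is made up for illustration:

```shell
# Re-implementation of the print_option helper from the script above,
# shown standalone so its behaviour is easy to verify.
print_option () {
    option=$1; shift
    # Walk the remaining words; whenever one equals the wanted option,
    # print the word that follows it.
    while [ $# -gt 0 ]; do
        case $1 in
            ${option}) echo $2; shift; shift ;;
            *) shift ;;
        esac
    done
}

# Word splitting is intentional here, mirroring the script's unquoted call:
print_option -m savepnpc -s backupsrv -g SAMPLE -m sample-lh   # prints "sample-lh"
```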
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
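The `$(basename ${RGname} -rg)` construct used above relies on `basename`'s optional suffix-stripping operand; a quick check of the expansion:

```shell
RGname=sample-rg
# basename with a second argument strips that suffix from the name.
NAME=$(basename ${RGname} -rg)
echo ${NAME}                 # prints "sample"
echo ${NAME}-hasp-zfs-res    # prints "sample-hasp-zfs-res"
```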
9a0c1d740a50357c9fb8d2ae67f78066da1f49de
Gromphadorhina portentosa
0
145
479
420
2014-09-26T12:00:33Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
3ea1e3c8669391b28cd6f9607d798d75eec5e8dd
480
479
2014-09-26T12:00:45Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
bdae5b9c0d6e638a2ff8eb928701d0a53a93df3e
482
480
2014-09-26T12:01:28Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorhina portentosa]] nach [[Gromphadorrhina portentosa]]: Gattung falsch geschrieben
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
bdae5b9c0d6e638a2ff8eb928701d0a53a93df3e
505
482
2014-09-26T13:05:00Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
8a335135f79fc990fc89d13dc351d895330bcd9a
Gromphadorhina spec.
0
144
484
409
2014-09-26T12:01:53Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
46f6d60b03f816a8cdfaa4dd8fcbb778860dabb8
504
484
2014-09-26T13:04:41Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
1f7b53b23df0e5cf836c74b37fc358637f4ea80d
Therea regularis
0
171
485
2014-09-26T12:05:16Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Therea regularis | Autor = | Untergattung = | Gattung = Therea | Familie…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Therea regularis
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
e8083aaf3e50a2e2b586ee8fcf696c106515b0d1
490
485
2014-09-26T12:11:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Therea regularis
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Art =
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
5054fa8c38ff9219bc41ec6439318657dda9b7de
507
490
2014-09-26T13:09:48Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Art =
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
669fdb24012da2840785150518ff206c6a5e3076
508
507
2014-09-26T13:10:10Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Art =
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
A small, lively species.
1c95cdce5518c5d578eb6580eeb8e74b5a7c2bc4
Category:Therea
14
172
486
2014-09-26T12:05:32Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Schaben]]“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
b2ea0c52ba3cc8ec3158b4eaccc987028f8b404c
Therea olegrandjeani
0
173
487
2014-09-26T12:07:05Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Therea olegrandjeani | Autor = | Untergattung = | Gattung = Therea | Fam…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Therea olegrandjeani
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
635c8010250fe4958f8ab717468713a4b4e17d27
488
487
2014-09-26T12:09:17Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Blaberidae
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
01f39fc734ee3184b809c0ee69ce60b49f93ef8c
489
488
2014-09-26T12:10:52Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
1f8d05aee7decd559251052ea21c32b94bf0da6d
491
489
2014-09-26T12:12:00Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor =
| Untergattung =
| Gattung = Therea
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Art =
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
c9601be3c098d69705898ed9e636879f88d645ed
Category:Schaben
14
142
492
400
2014-09-26T12:21:04Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
{{#categorytree:Schaben|mode=pages|depth=3}}
f251c5dfd50a3ead367bada48bb9bc9c0ae7e805
493
492
2014-09-26T12:23:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
{{#categorytree:Schaben|mode=pages|hideroot=on|depth=3}}
6c5fa66c7e87ec5eef98493ccc6df6d0c3dba391
Category:Tiere
14
40
494
75
2014-09-26T12:32:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Projekte]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
106d90f911da9c9d0da47a695d19c6dd77d36030
Category:Ameisen
14
3
495
73
2014-09-26T12:33:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
d31e780da4ab61c502fb0c26d697c8c76ea3156c
Category:Tausendfuesser
14
9
496
74
2014-09-26T12:34:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
d31e780da4ab61c502fb0c26d697c8c76ea3156c
Elliptorhina javanica
0
146
497
408
2014-09-26T12:35:02Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor =
| Untergattung =
| Gattung = Elliptorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
9bec3b837fab65b867152261ff4d268986fdd446
Template:Systematik
10
117
498
412
2014-09-26T12:39:14Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| Familie:{{{Untergattung|}}}}}
{{#if:{{{Unterfamilie|}}}| Unterfamilie:{{{Unterfamilie|}}}}}
{{#if:{{{Untergattung|}}}| UnterGattung:{{{Untergattung|}}}}}
</includeonly>
<noinclude>
9834985d42634938c637a24be96ebc772220944d
499
498
2014-09-26T12:40:53Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| Familie:{{{Familie|}}}}}->{{#if:{{{Unterfamilie|}}}| Unterfamilie:{{{Unterfamilie|}}}}}->{{#if:{{{Gattung|}}}| Gattung:{{{Gattung|}}}}}->{{#if:{{{Untergattung|}}}| UnterGattung:{{{Untergattung|}}}}}
</includeonly>
<noinclude>
50c14fc721120eb20f075c741fc93d3b353c65c8
500
499
2014-09-26T12:47:23Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [{{{Familie|}}}]}}->{{#if:{{{Unterfamilie|}}}| [{{{Unterfamilie|}}}]}}->{{#if:{{{Gattung|}}}| [{{{Gattung|}}}]}}->{{#if:{{{Untergattung|}}}| [{{{Untergattung|}}}]}}
</includeonly>
<noinclude>
31b9af660e54823efddb959c58cc4604fa05759a
501
500
2014-09-26T12:48:16Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]}}
</includeonly>
<noinclude>
6edc931a71c7d05f47e0a165ca51973086665613
503
501
2014-09-26T12:50:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
9e311eca7d90995a4c8e849a5e9ad28f182d8354
Princisia vanwaerebeki
0
152
506
418
2014-09-26T13:06:10Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor =
| Untergattung =
| Gattung = Princisia
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
21d3f582150aec2b6292112e1680347aa9757748
Therea regularis
0
171
510
508
2014-09-26T13:40:40Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor =
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Gattung = Therea
| Untergattung =
| Art = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Kleine, quirlige Art.
7f754c797dbcbf11a3c021bd51840955e2587f85
Template:Systematik
10
117
511
503
2014-09-26T13:44:20Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}|{{#if:{{{Art|}}}|{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
9804163cbf48b3be593b1ea6f4fcc0402e5ed9aa
512
511
2014-09-26T13:45:53Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}|{{#if:{{{Art|}}}| {{{Art|}}},{{{Gattung}}} }}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
fbdd983195169336b1f4ece6289753b139a29e0b
513
512
2014-09-26T13:47:06Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| |{{{Art|}}},{{{Gattung}}} }}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
259e9a3b8989ec0279caec93ba119a50de0f52c1
514
513
2014-09-26T13:54:11Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
40cafcfbe2dcbc0b6bdcfd5bbe018b4ca33d0ffa
549
514
2014-10-07T14:41:59Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
2a827280b181bee822f162cd58f5026624a0659d
Therea olegrandjeani
0
173
515
491
2014-09-26T13:54:56Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor =
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Gattung = Therea
| Untergattung =
| Art = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
ae66d14a749389db2f013bee5c422ccc5d10ab53
Archimandrita tesselata
0
148
516
502
2014-09-26T14:02:38Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata an einem Stück Gurke
| Autor =
| Untergattung =
| Gattung = Archimandrita
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Art =
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata ist eine große, meist recht scheue Schabenart.
Ihren Namen "Pfefferschabe" trägt sie aufgrund der Zeichnung auf ihren Flügeldecken.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|An Gurke
Image:Archimandrita_tesselata_R0015551.png|Können diese Augen lügen?
</gallery>
<gallery mode="slideshow" caption="Häutung eines Archimandrita tesselata Männchens">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
5040721e37d1cade32daf35e62c0f128413bafdd
517
516
2014-09-26T14:03:27Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata an einem Stück Gurke
| Autor =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
Archimandrita tesselata ist eine große, meist recht scheue Schabenart.
Ihren Namen "Pfefferschabe" trägt sie aufgrund der Zeichnung auf ihren Flügeldecken.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|An Gurke
Image:Archimandrita_tesselata_R0015551.png|Können diese Augen lügen?
</gallery>
<gallery mode="slideshow" caption="Häutung eines Archimandrita tesselata Männchens">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
dcf15b6b4229a7b0e29bc9c80caa9211bb471ccc
Gromphadorhina spec.
0
144
518
504
2014-09-26T14:04:00Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
4f3b479ba8e6025c865a97239be2d12c89b0b069
Gromphadorhina portentosa
0
145
519
505
2014-09-26T14:04:22Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
b8643d7dfd4d9962ffe74d5e650b92f3c9fa5245
558
519
2014-10-07T14:46:54Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
418c4462cf1b26c98e507ab6ab7d1d0cf7a755b8
559
558
2014-10-07T14:47:06Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorrhina portentosa]] nach [[Gromphadorhina portentosa]] und überschrieb dabei eine Weiterleitung
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
418c4462cf1b26c98e507ab6ab7d1d0cf7a755b8
Blaptica dubia
0
150
520
416
2014-09-26T14:04:47Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| WissName = Blaptica dubia
| Autor =
| Untergattung =
| Gattung = Blaptica
| Familie = Blaberidae
| Unterfamilie =
| Art = dubia
| Verbreitung = Argentinien, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
652dae06b6aeb473cb0f830710455b176dd594e9
Elliptorhina javanica
0
146
521
497
2014-09-26T14:05:11Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor =
| Untergattung =
| Gattung = Elliptorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
2e04485f7ee1b32a0ba196bed32fe3ed147d63ed
525
521
2014-10-01T06:43:58Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor =
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Untergattung =
| Gattung = Elliptorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
f79dc52a5b90f378344b7043f9ef98bf892fadc0
550
525
2014-10-07T14:43:18Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor =
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Untergattung =
| Gattung = Elliptorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
a7cd573606cf8d902f2bc7318c08ef7ef63d3134
Princisia vanwaerebeki
0
152
522
506
2014-09-26T14:06:18Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor =
| Untergattung =
| Gattung = Princisia
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
71448dde9a99bfc84c5b5b92a45376656e3efe73
Gromphadorhina oblongonota
0
175
523
2014-09-30T13:55:38Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Fauchschabe | WissName = Gromphadorrhina oblongonata | Autor = | Untergattung = | Gattung …“
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorrhina oblongonata
| Autor =
| Untergattung =
| Gattung = Gromphadorrhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = oblongonata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
e586c5c5997edd3356ed68b988373f62fac952b1
551
523
2014-10-07T14:43:48Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonata
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = oblongonata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
a8b3b527a83a02d47cc0bf640277a1f2114950bf
552
551
2014-10-07T14:44:20Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorrhina oblongonata]] nach [[Gromphadorhina oblongonata]]: Korrektur Gattung
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonata
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = oblongonata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
a8b3b527a83a02d47cc0bf640277a1f2114950bf
554
552
2014-10-07T14:45:11Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorhina oblongonata]] nach [[Gromphadorhina oblongonota]]
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonata
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = oblongonata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
a8b3b527a83a02d47cc0bf640277a1f2114950bf
556
554
2014-10-07T14:45:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
4eba548b4332b1958887b46fa1b6f8cd48b40d37
File:Elliptorhina javanica.JPG
6
176
524
2014-10-01T06:42:27Z
Lollypop
2
Elliptorhina javanica an Reiswaffel
wikitext
text/x-wiki
Elliptorhina javanica an Reiswaffel
4bd659e80590130b95f53d6a636ee043b22a03a5
Fibrechannel Analyse
0
139
526
478
2014-10-01T14:01:15Z
Lollypop
2
/* Script zum parsen einer configupload-Datei */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
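As a sanity check, the slot lookup can be scripted. This is only a sketch: the path fragments are hardcoded from the Doc ID 1277396.1 table above, and since the pci@7c node can show up as pci@79 (or similar) depending on BIOS/OS version, it is matched loosely.

```shell
# Sketch: map a luxadm port path to its PCIe slot using the path fragments
# from the table above. The pci@7c node may appear as pci@79 depending on
# BIOS/OS version, hence the loose pci@7? glob.
slot_for_path() {
    case "$1" in
        */pci@0,0/pci10de,376@e/*)  echo "PCIe SLOT1" ;;
        */pci@7?,0/pci10de,376@e/*) echo "PCIe SLOT4" ;;
        */pci@7?,0/pci10de,378@b/*) echo "PCIe SLOT5" ;;
        *)                          echo "unknown" ;;
    esac
}

slot_for_path "/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl"   # PCIe SLOT5
```

Feed it the paths from luxadm -e port to confirm the slot assignment.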
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a given port
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 14: Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we sit, together with two storage systems, on the switch with ID 1, which in turn has a link to a switch with ID 3 to which two further storage systems are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths through this single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
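The Port_ID decoding above can be made explicit with a tiny helper (a sketch; requires bash). A Port_ID is a 24-bit FC address: zero-padded to six hex digits, the first byte is the domain (switch) ID and the second the switch port/area.

```shell
# Sketch: split a dump_map Port_ID (24-bit FC address: domain/area/AL_PA)
# into switch ID and switch port. The port byte is kept as a hex string to
# match the notation used above (e.g. "14" for Port_ID 11400). Needs bash.
parse_port_id() {
    local id
    id=$(printf '%6s' "$1" | tr ' ' 0)   # zero-pad to 6 hex digits
    printf 'Switch %d, Port %s\n' "$((16#${id:0:2}))" "${id:2:2}"
}

parse_port_id 30200   # Switch 3, Port 02
parse_port_id 11400   # Switch 1, Port 14
```

This reproduces the switch/port bookkeeping done by hand above.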
===luxadm probe===
Lists all detected Fibre Channel devices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough fix on the firmware (firmware version: 07.84.47.10);
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, ...
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
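To pick out only the usable ports from that output, a short awk filter helps. This is a sketch; in practice you would pipe fcinfo hba-port into it, while the here-doc below just replays two trimmed records from the listing above.

```shell
# Sketch: reduce fcinfo hba-port output to "controller portWWN" for online ports.
online_ports() {
    awk -F': *' '
        /HBA Port WWN/   { wwn = $2 }
        /OS Device Name/ { dev = $2 }
        /State/          { if ($2 == "online") print dev, wwn }
    '
}

cat <<'EOF' | online_ports      # prints: /dev/cfg/c4 2100001b328a417f
HBA Port WWN: 2100001b328a417f
        OS Device Name: /dev/cfg/c4
        State: online
HBA Port WWN: 2101001b32aa417f
        OS Device Name: /dev/cfg/c5
        State: offline
EOF
```

Real use: fcinfo hba-port | online_ports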
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
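With eight remote ports this output gets long; a one-line-per-port summary makes a flaky link stand out immediately. A sketch (pipe the real fcinfo remote-port --port <WWN> --linkstat output in; the here-doc replays two trimmed records from above):

```shell
# Sketch: one summary line per remote port from fcinfo --linkstat output.
# Relies on "Loss of Sync Count" following "Link Failure Count" per record,
# as in the output above.
linkstat_summary() {
    awk -F': *' '
        /Remote Port WWN/    { wwn  = $2 }
        /Link Failure Count/ { fail = $2 }
        /Loss of Sync Count/ { printf "%s link_failures=%s loss_of_sync=%s\n", wwn, fail, $2 }
    '
}

cat <<'EOF' | linkstat_summary
Remote Port WWN: 201700a0b86e103c
        Link Failure Count: 4
        Loss of Sync Count: 261
Remote Port WWN: 202200a0b85aeb2d
        Link Failure Count: 2
        Loss of Sync Count: 1
EOF
```

In the sample, port 201700a0b86e103c with its 261 sync losses is the one worth investigating.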
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
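When many LUNs are unusable, it helps to derive the distinct Ap_Ids first, since unconfiguring works per controller::WWN rather than per LUN. A sketch over sample lines from the listing above:

```shell
# Sketch: extract the unique controller::portWWN Ap_Ids marked unusable,
# ready to be fed to `cfgadm -c unconfigure -o unusable_SCSI_LUN <Ap_Id>`.
unusable_apids() {
    awk '/unusable/ { sub(/,[0-9]+$/, "", $1); print $1 }' | sort -u
}

cat <<'EOF' | unusable_apids
c8::21000024ff2d49a2,0    disk   connected  configured  unusable
c8::21000024ff2d49a2,1    disk   connected  configured  unusable
c9::203400a0b839c421,31   disk   connected  configured  unusable
EOF
```

Real use: for ap in $(cfgadm -al -o show_FCP_dev | unusable_apids); do cfgadm -c unconfigure -o unusable_SCSI_LUN "$ap"; done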
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this triggers a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a doubled ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# 7 ports are fitted with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
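The port-state bookkeeping in the last three points can be automated. A sketch: port rows start with a numeric index, and column 6 holds the state (sample rows taken from the switchshow output above).

```shell
# Sketch: tally switchshow port states; a numeric first column selects port rows.
portstate_tally() {
    awk '$1 ~ /^[0-9]+$/ { count[$6]++ } END { for (s in count) print s, count[s] }' | sort
}

cat <<'EOF' | portstate_tally
  0   0   010000   id    N8    No_Light    FC
  1   1   010100   id    N8    Online      FC  E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
  2   2   010200   id    N8    Online      FC  F-Port 21:00:00:24:ff:05:74:e4
 16  16   011000   --    N8    No_Module   FC  (No POD License) Disabled
EOF
```

Run over the full listing it gives the per-state port counts in one step.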
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
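For reference, the [Zoning] section this script consumes consists of cfg./zone./alias. lines of the form key.name:member;member. A minimal extraction of a zone line, with made-up names for illustration:

```shell
# Sketch: the zone./alias./cfg. line format parsed by the script above.
# The zone and alias names here are hypothetical.
cat <<'EOF' | awk -F: '/^zone\./ { sub(/^zone\./, "", $1); print $1 " -> " $2 }'
zone.z_host1_stor1:a_host1;a_stor1
alias.a_host1:21:00:00:1b:32:90:2d:45
EOF
```

The alias line is ignored by this filter but shows why the full script strips everything after the first colon only for names, not for WWN members.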
f0a44fc77677e716b8d3037f4d6f62df406825d9
527
526
2014-10-01T14:01:26Z
Lollypop
2
/* Script zum parsen einer configupload-Datei */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis on Solaris=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a given port
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 14: Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we sit, together with two storage systems, on the switch with ID 1, which in turn has a link to a switch with ID 3 to which two further storage systems are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths through this single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough fix on the firmware (firmware version: 07.84.47.10);
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, ...
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this triggers a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 7 ports are populated with SFPs but unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
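The port-state tally above can be reproduced mechanically; a small awk one-liner over a saved switchshow dump counts ports per State column (the file name `switchshow.txt` is an assumption; table rows are recognized by their leading numeric port index):

```shell
# Count ports per State column ($6) in the switchshow port table;
# only lines starting with a numeric index belong to the table
awk '$1 ~ /^[0-9]+$/ { state[$6]++ } END { for (s in state) print s, state[s] }' switchshow.txt
```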
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
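sshd is picky about permissions on these files, so it is worth tightening them right after writing the key (standard OpenSSH behaviour; assuming FabOS keeps the usual `/root/.ssh` layout):

```shell
# sshd may ignore an authorized_keys file that is group/world accessible
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
```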
===Generate an ssh key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
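Without cleanup the backup directory grows by one file per switch per run, so a retention job on the backup host is useful; a sketch (the 90-day window and the path under the backup user's home are assumptions):

```shell
# Delete configupload dumps older than 90 days
find ~backupuser/brocade_backup -name '*_config_*.txt' -type f -mtime +90 -delete
```

Run it from cron, e.g. once a day after the backup itself.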
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
# Parses the [Zoning] section of a Brocade configupload file: prints the
# enabled config with aliases resolved, then emits the alicreate/zonecreate/
# cfgcreate commands needed to rebuild that config from scratch.
BEGIN{
# OUI (vendor portion of a WWN) -> vendor name
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
# NAA-5 WWNs carry the OUI right after the leading "5"; 1/2-format WWNs at hex offset 5
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
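The 261 sync losses on remote port 201700a0b86e103c stand out against its neighbours. A hedged sketch to flag such counters automatically (the function name and the threshold of 100 are my choices, tune to your environment):
<source lang=bash>
# Report remote ports whose "Loss of Sync Count" exceeds a threshold.
# flag_sync_losses is a hypothetical helper; it reads fcinfo output on stdin.
flag_sync_losses() {
    awk -v limit="${1:-100}" '
        /Remote Port WWN:/    { wwn = $NF }
        /Loss of Sync Count:/ { if ($NF + 0 > limit)
                                    printf "%s: %d sync losses\n", wwn, $NF }'
}
# Usage: fcinfo remote-port --port 2100001b32904445 --linkstat | flag_sync_losses 100
</source>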
==mpathadm==
===mpathadm list lu===
Example output (illustrative, not captured from a live system; the path counts match the four paths of the example device in [[#luxadm display <Diskpath|WWN>]]):
<source lang=bash>
# mpathadm list lu
        /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
                Total Path Count: 4
                Operational Path Count: 4
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
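With many unusable LUNs, typing the unconfigure command per Ap_Id gets tedious. A hedged sketch that derives one unconfigure command per device from the listing above; it only prints the commands, so you can review them before running anything (the function name is mine):
<source lang=bash>
# Turn `cfgadm -al -o show_FCP_dev` output (stdin) into one
# `cfgadm -c unconfigure -o unusable_SCSI_LUN` command per unusable device.
# unusable_unconfigure_cmds is a hypothetical helper name.
unusable_unconfigure_cmds() {
    awk '/unusable/ { sub(/,[0-9]+$/, "", $1); seen[$1]++ }
         END { for (d in seen)
                   printf "cfgadm -c unconfigure -o unusable_SCSI_LUN %s\n", d }'
}
# Usage: cfgadm -al -o show_FCP_dev | unusable_unconfigure_cmds
</source>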
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this issues a forced LIP (loop initialization)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) via E-Ports to another switch (san-sw_21)
# 7 ports have SFPs installed but are unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
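To run the backup regularly, a crontab entry on the backup host could look like this (the schedule and log path are assumptions):
<source lang=bash>
# Hypothetical schedule: every night at 02:30, logging to a file.
30 2 * * * /opt/bin/backup_brocade_config >>/var/log/brocade_backup.log 2>&1
</source>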
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
# Parse the [Zoning] section of a Brocade configupload file: print the
# enabled config with aliases resolved and a vendor guess per member,
# then emit the commands needed to recreate that config from scratch.
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
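The vendor table at the top of the script keys on the OUI embedded in a WWN, and where that OUI sits depends on the WWN's NAA type: for NAA-5 WWNs (first nibble 5) it follows immediately after that nibble, otherwise it starts at the fifth nibble. The same rule as a standalone sketch (the function name oui_of is mine):
<source lang=bash>
# Extract the 24-bit vendor OUI from a WWN, mirroring the awk script's
# substr(WWN,start,6) logic. oui_of is a hypothetical helper name.
oui_of() {
    wwn=$(printf '%s' "$1" | tr -d :)          # strip the colons
    case $wwn in
        5*) printf '%s\n' "$wwn" | cut -c2-7 ;;   # NAA 5: OUI in nibbles 2-7
        *)  printf '%s\n' "$wwn" | cut -c5-10 ;;  # NAA 1/2: OUI in nibbles 5-10
    esac
}
oui_of 21:00:00:1b:32:91:4c:ed   # -> 001b32 (Qlogic)
oui_of 50:0a:09:81:8d:32:5d:c4   # -> 00a098 (NetApp)
</source>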
=Commands: NetApp=
==fcp topology show : Where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWNs do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel analysis under Solaris=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><AL_PA>
This is the 24-bit FC address: domain (switch ID), area (switch port), AL_PA; the AL_PA byte is 00 for fabric-attached ports.
So there are evidently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 20 (0x14) : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached, together with 2 storage systems, to the switch with ID 1, and have a connection to a switch with ID 3 to which 2 more storage systems are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same node WWN)
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths through this single host port.
Hence 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
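The Port_ID is just the 24-bit FC address. A hedged bash sketch (the function name decode_portid is mine) that splits a luxadm Port_ID into its components:
<source lang=bash>
# Decode a luxadm Port_ID (24-bit FC address) into domain (switch ID),
# area (switch port) and AL_PA. decode_portid is a hypothetical helper name.
decode_portid() {
    id=$(printf '%06x' "0x$1")     # re-pad: luxadm drops leading zeros
    printf 'switch id %d, port %d, alpa %d\n' \
        "0x$(printf '%s' "$id" | cut -c1-2)" \
        "0x$(printf '%s' "$id" | cut -c3-4)" \
        "0x$(printf '%s' "$id" | cut -c5-6)"
}
decode_portid 10800    # -> switch id 1, port 8, alpa 0
decode_portid 11400    # -> switch id 1, port 20, alpa 0
</source>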
===luxadm probe===
Lists all recognized Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough firmware indication (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy when working with NetApps, to match up the LUNs.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <ONLINE|STANDBY|OFFLINE>
Mapping of a controller to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access), the device tells the host which paths it should primarily use to access the LUN.
c1abd1fdfb387aa58da2d8fd3f57555dd0f16540
531
530
2014-10-01T15:25:03Z
Lollypop
2
/* Kommandos: NetApp */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibrechannel Analyse unter Solaris=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Gibt die Hardwarepfade der vorhandened Fibrechannelports und deren Status aus:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 Dualport Karten:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
Aus der Seite [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support Login benötigt):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
Also stecken unsere Karten in Slot 4 und 5.
===luxadm -e dump_map <HW_path>===
Gibt die Tabelle der bekannten Geräte an einem Port aus
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switch_port><ALPA>
(The last byte is the arbitrated-loop physical address; it is 00 for fabric-attached ports.)
So there are obviously 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 14: Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we are attached to the switch with ID 1 together with 2 storage systems, and have a connection to a switch with ID 3 to which 2 more storage systems are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths via our single host port.
Hence 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
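The Port_ID decomposition above can be checked with a small helper. A sketch only: it splits the 24-bit FC address into its domain (switch), area (port), and AL_PA bytes; note that all three fields are hexadecimal.

```shell
#!/bin/bash
# Decode a Fibre Channel Port_ID (24-bit hex: <domain><area/port><ALPA>)
# as printed by "luxadm -e dump_map".
decode_portid() {
    # Pad to 6 hex digits, then split into three bytes.
    local id
    id=$(printf '%06x' "0x$1")
    printf 'Port_ID %s: switch domain %d, switch port %d, ALPA 0x%s\n' \
        "$1" "0x${id:0:2}" "0x${id:2:2}" "${id:4:2}"
}
decode_portid 30200   # -> switch domain 3, switch port 2, ALPA 0x00
decode_portid 10800   # -> switch domain 1, switch port 8, ALPA 0x00
```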
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough firmware estimate (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, one block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping of a controller to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
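To spot flaky links quickly (such as the Loss of Sync Count of 261 above), the error counters can be summed per remote port with a small awk filter. A sketch that assumes the output format shown above:

```shell
#!/bin/bash
# Sum all link-error counters per remote port from the output of
# "fcinfo remote-port ... --linkstat" (read on stdin), worst port first.
linkstat_summary() {
    awk '/Remote Port WWN:/ { wwn = $4 }
         /Count: *[0-9]+$/  { sum[wwn] += $NF }
         END { for (w in sum) printf "%s %d\n", w, sum[w] }' | sort -k2,2nr
}
# Usage: fcinfo remote-port --port 2100001b32904445 --linkstat | linkstat_summary
```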
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
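With many stale LUNs this gets tedious, so the attachment points can be collected from the listing above and fed into a loop. A sketch only: review the list first, and only drop the "echo" once you are sure, since unconfiguring detaches devices.

```shell
#!/bin/bash
# Extract the unique attachment points that carry unusable FCP LUNs
# from "cfgadm -al -o show_FCP_dev" output (read on stdin).
unusable_aps() {
    awk '/unusable/ { sub(/,.*$/, "", $1); print $1 }' | sort -u
}
# Usage (remove the "echo" to actually unconfigure):
#   cfgadm -al -o show_FCP_dev | unusable_aps | while read -r ap; do
#       echo cfgadm -c unconfigure -o unusable_SCSI_LUN "${ap}"
#   done
```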
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this issues a forced LIP (forcelip)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 6 ports are fitted with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
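To run this regularly, a crontab entry on the backup host could look like this (a sketch; the log path is an assumption):

```shell
# Nightly at 02:30, as the user that may ssh to the switches as root
30 2 * * * /opt/bin/backup_brocade_config >>/var/log/backup_brocade_config.log 2>&1
```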
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
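The OUI extraction in the awk script above (offset 2 for WWNs starting with 5, offset 5 otherwise) can also be used stand-alone to identify a vendor from a single WWN. A sketch with the same vendor table:

```shell
#!/bin/bash
# Look up the vendor OUI embedded in a WWN, mirroring the awk logic above:
# WWNs starting with 5 carry the OUI right after the leading nibble,
# WWNs starting with 1 or 2 carry it in bytes 3-5.
wwn_vendor() {
    local wwn oui
    wwn=$(echo "$1" | tr -d ':' | tr 'A-F' 'a-f')
    case "${wwn}" in
        5*) oui=${wwn:1:6} ;;
        *)  oui=${wwn:4:6} ;;
    esac
    case "${oui}" in
        001b32|0024ff) echo "Qlogic" ;;
        00a098)        echo "NetApp" ;;
        001438)        echo "Hewlett-Packard" ;;
        0000c9)        echo "Emulex" ;;
        *)             echo "unknown (OUI ${oui})" ;;
    esac
}
wwn_vendor 21:00:00:1b:32:91:4c:ed    # -> Qlogic
wwn_vendor 50:0a:09:81:8d:32:5d:c4    # -> NetApp
```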
=Commands: NetApp=
==fcp topology show: where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port>: which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice bonus is the "host address", which shows us that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show): alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
aebf5f335c6ce6c74dbd6176cbf7c366a5819a5d
532
531
2014-10-01T15:35:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibrechannel Analyse=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Gibt die Hardwarepfade der vorhandened Fibrechannelports und deren Status aus:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 Dualport Karten:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
Aus der Seite [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support Login benötigt):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
Also stecken unsere Karten in Slot 4 und 5.
===luxadm -e dump_map <HW_path>===
Gibt die Tabelle der bekannten Geräte an einem Port aus
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Erklärung der interessanten Spalten:
* Port_ID <Switch_ID><Switchport><??>
Es sind also offensichtlich 2 Switches in der Fabric an Port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
und zwar mit der ID 1 und mit der ID 3.
Switch ID 1
Port 1 und 5 : Node WWN 200400a0b85bb030
Port 2 und 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (Wir selbst)
Switch ID 3
Port 1 und 5 : Node WWN 200200a0b85aeb2d
Port 2 und 6 : Node WWN 200600a0b86e10e4
Wir hängen also mit 2 Storages auf dem Switch mit der ID 1 und haben eine Verbindung zu einem Switch mit der ID 3 an dem 2 weitere Storages hängen.
* Node WWN
Wir sehen hier 4 Disk Devices mit jeweils 2 Einträgen (Gleiche Node WWN)
* Port WWN
Dies ist die Port WWN der an den Switch angeschlossenen Geräte (unter 8 finden wir uns selbst).
Pro Storage sehen wir hier 2 Port WWNs, also 2 Pfade über unseren einen Hostport.
Daher nachher 4 Pfade (2 Pro Hostport) beim [[#mpathadm list lu]].
* Type
Disk Device: Storage
Host Bus Adapter: FC-Karte
===luxadm probe===
Auflistung aller erkannten Fibrechanneldevices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Hersteller
* Product ID: STK6580_6780
Also ein StorageTek 6580/6780
* Revision: 0784
Grobe Firmwarepeilung (Firmware Version: 07.84.47.10)
Siehe hier [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Praktisch, wenn man mit NetApps arbeitet, um die LUNs zuzuordnen.
* Unformatted capacity: 204800.000 MBytes
Immer gut zu wissen
* Write Cache: Enabled
Die Batterie im Storage sollte also OK sein ;-)
* Path(s):
Rawdevicepath
Hardwaredevicepath
Jetzt folgen immer pro Pfad zu diesem Device ein Block aus
Controller (siehe unten)
Device Address <Port WWN vom Device>,<LUN ID>
Class <primary|secondary> (siehe unten)
State <Online|Standby|Oflline>
Zuweisung Controller zum FC-Port über:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Man sieht den Hardwarepfad von [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) teilt das Device dem Host mit, über welche Pfade der Host primär auf die LUN zugreifen soll.
==fcinfo==
===fcinfo hba-port===
Gibt ein paar Infos über Hersteller, Modell, Firmware, Port und Node WWN, Current Speed, ... aus
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful! Does a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Kommandos : Common Array Manager=
==lsscs==
Ist unter Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
==Switch-Kommandos==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
Was sagt uns das?
# Dies ist der "Principal" (alle andere sind "Subordinate") der Fabric "Fabric1" (switchRole:, zoning:)
# Der Switch ist gezoned (zoning:)
# SwitchID ist "fffc01"
# Es ist ein 24-Port Switch
# Es gibt einen doppelten ISL (InterSwitchLink) zu einem anderen Switch E-Port (san-sw_21)
# 6 Ports sind mit SFPs bestückt, aber nicht belegt (0,4,11-15)
# 8 Ports haben keine Lizenz und auch kein SFP (No_Module)
# 9 Ports sind belegt
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port-Kommandos==
===portstatsshow===
===portstatsclear===
==Zone-Kommandos==
===zoneshow===
===alicreate===
===alishow===
==Backup der Switchconfig per Script==
===Put the backup host ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
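Since the script writes a new timestamped file per run, old copies accumulate on the backup host. A retention sketch (the function name and the 30-day window are assumptions; adjust to your policy):

```shell
# Delete Brocade config backups older than a given number of days.
# Usage: prune_brocade_backups <backup_dir> <days>
prune_brocade_backups() {
  find "$1" -name '*_config_*.txt' -mtime "+$2" -delete
}
```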
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
n=asort(zone_members);
print "Zone",active_zones[active_zone],"(",n,"Members ):";
for(zone_member=1;zone_member<=n;zone_member++){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
n=asort(zone_members);
for(zone_member=1;zone_member<=n;zone_member++){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
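The vendor lookup in the script relies on where the OUI sits inside a WWN: for NAA format 5 WWNs it starts at the 2nd hex digit, otherwise (format 1/2) at the 5th. The same rule as a standalone shell helper (the function name is made up for illustration):

```shell
# Extract the 6-hex-digit vendor OUI from a WWN, with or without colons.
wwn_oui() {
  w=$(printf '%s' "$1" | tr -d ':')
  case $w in
    5*) printf '%s' "$w" | cut -c2-7 ;;   # NAA 5: OUI at digits 2-7
    *)  printf '%s' "$w" | cut -c5-10 ;;  # NAA 1/2: OUI at digits 5-10
  esac
}
```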
=Commands: NetApp=
==fcp topology show : Where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : What are my WWNs?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
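That "host address" is the 24-bit FC address ID: two hex digits each for domain (switch), area (port) and AL_PA. A throwaway decoder for such IDs (the function name and output format are my own):

```shell
# Split a 6-hex-digit FC address ID into domain / area / AL_PA.
fcid_decode() {
  printf 'domain=%s area=%s alpa=%s\n' \
    "$(printf '%s' "$1" | cut -c1-2)" \
    "$(printf '%s' "$1" | cut -c3-4)" \
    "$(printf '%s' "$1" | cut -c5-6)"
}
```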
==fcp wwpn-alias (set|show) : Alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
7946fbf33764799aef37556d9d421038ed954ef4
533
532
2014-10-01T15:36:04Z
Lollypop
2
Lollypop moved page [[Solaris FC]] to [[Fibrechannel Analyse]]: Not only Solaris
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
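Matching observed device paths against the slot prefixes from the matrix can be scripted. A sketch hard-wired to the X4440 prefixes quoted above (the helper name is invented; pci@79 is accepted as the renamed pci@7c per the note):

```shell
# Map a Solaris FC device path to its X4440 PCIe slot
# (prefixes taken from the support matrix quoted above).
x4440_slot() {
  case $1 in
    /devices/pci@7[c9],0/pci10de,378@b/*) echo SLOT5 ;;
    /devices/pci@7[c9],0/pci10de,376@e/*) echo SLOT4 ;;
    /devices/pci@7[c9],0/pci10de,377@f/*) echo SLOT2 ;;
    /devices/pci@0,0/pci10de,377@a/*)     echo SLOT3 ;;
    /devices/pci@0,0/pci10de,376@e/*)     echo SLOT1 ;;
    *) echo unknown ;;
  esac
}
```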
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we, along with 2 storage systems, are attached to the switch with ID 1, and there is a link to a switch with ID 3 to which 2 more storage systems are attached.
* Node WWN
Here we see 4 disk devices with 2 entries each (same Node WWN)
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage system we see 2 Port WWNs here, i.e. 2 paths through this single host port.
Hence the 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough indication of the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
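With many remote ports, eyeballing these statistics gets tedious. A filter over saved <code>fcinfo remote-port --linkstat</code> output that flags ports whose "Loss of Sync Count" exceeds a threshold (the threshold and function name are arbitrary; on the listing above it would single out 201700a0b86e103c with its count of 261):

```shell
# Print "WWN count" for remote ports whose Loss of Sync Count exceeds $2
# (default 10) in a saved fcinfo --linkstat output file $1.
flag_sync_loss() {
  awk -v thr="${2:-10}" '
    /Remote Port WWN:/    { port = $NF }
    /Loss of Sync Count:/ { if ($NF + 0 > thr) printf "%s %s\n", port, $NF }
  ' "$1"
}
```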
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
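<code>cfgadm -c unconfigure -o unusable_SCSI_LUN</code> takes the controller::WWN attachment point without the trailing LUN number, so a list like the one above first needs de-duplicating. One way (the helper name is an assumption):

```shell
# Reduce `cfgadm -al -o show_FCP_dev` output (saved to file $1) to the
# unique controller::WWN attachment points of unusable LUNs.
unusable_targets() {
  awk -F, '/unusable/ { print $1 }' "$1" | sort -u
}
```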
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful! Does a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate"); see switchRole: and zoning:
# Zoning is enabled on the switch (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a dual ISL (Inter-Switch Link) to another switch, via two E-Ports ("san-sw_21")
# 7 ports have an SFP installed but are unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config with a script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
n=asort(zone_members);
print "Zone",active_zones[active_zone],"(",n,"Members ):";
for(zone_member=1;zone_member<=n;zone_member++){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
n=asort(zone_members);
for(zone_member=1;zone_member<=n;zone_member++){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show : Where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : What are my WWNs?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : Alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
7946fbf33764799aef37556d9d421038ed954ef4
535
533
2014-10-01T15:36:47Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we, along with 2 storage systems, are attached to the switch with ID 1, and there is a link to a switch with ID 3 to which 2 more storage systems are attached.
* Node WWN
Here we see 4 disk devices with 2 entries each (same Node WWN)
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage system we see 2 Port WWNs here, i.e. 2 paths through this single host port.
Hence the 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
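A Port_ID as printed by <code>luxadm -e dump_map</code> drops leading zeros; padding it back to six hex digits makes the <Switch_ID><Switchport> split mechanical. A small sketch (the helper name is made up):

```shell
# Decode a dump_map Port_ID: zero-pad to 6 hex digits, then the first two
# digits are the switch domain and the next two the switch port.
portid_decode() {
  id=$(printf '%6s' "$1" | tr ' ' 0)
  printf 'switch=%s port=%s\n' \
    "$(printf '%s' "$id" | cut -c1-2)" \
    "$(printf '%s' "$id" | cut -c3-4)"
}
```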
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough indication of the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slot 4 and slot 5.
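Matching the fp device paths against the slot matrix can be scripted; a minimal bash sketch (the slot assignments follow the table above, and `slot_of` is an invented helper name, not a system command):

```shell
#!/bin/bash
# Map a luxadm -e port device path to its PCIe slot. The pci10de,378@b /
# pci10de,376@e bridge names come from the matrix above; pci@79,0 is the
# renamed pci@7c,0 root complex, so 376@e here means SLOT4, not SLOT1.
slot_of() {
    case "$1" in
        */pci10de,378@b/*) echo "PCIe SLOT5" ;;
        */pci10de,376@e/*) echo "PCIe SLOT4" ;;
        *)                 echo "unknown slot" ;;
    esac
}
slot_of "/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl"
```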
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <switch domain ID><switch port><AL_PA (00 on fabric point-to-point links)>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with domain ID 1 and one with domain ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we sit, together with two storage systems, on the switch with ID 1, and there is a link to a switch with ID 3 with two more storage systems behind it.
* Node WWN
We see four disk devices with two entries each (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage we see two port WWNs here, i.e. two paths through our single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
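The Port_ID decoding described above can be sketched as a small bash helper (`decode_portid` is an invented name, not a luxadm feature):

```shell
#!/bin/bash
# Split a 24-bit FC Port_ID (as printed by luxadm -e dump_map) into its
# domain / port (area) / AL_PA bytes. The input may have leading zeros
# stripped, so left-pad it back to 6 hex digits first.
decode_portid() {
    local id
    printf -v id '%06x' "0x$1"
    echo "Domain=0x${id:0:2} Port=0x${id:2:2} ALPA=0x${id:4:2}"
}
decode_portid 30200   # entry 0 above: switch domain 3, switch port 02
decode_portid 10800   # entry 8 above: switch domain 1, switch port 08
```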
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough indication of the firmware level (Firmware Version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching LUNs when you work with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller maps to the FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
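To sanity-check that all expected paths of a LUN are visible, the State lines of `luxadm display` can be tallied; a sketch (sample lines are inlined here for illustration; in practice pipe the real output in, and `count_states` is an invented helper name):

```shell
#!/bin/bash
# Count path states in luxadm display output; with two connected HBA ports
# and an ALUA array you expect 2 ONLINE (primary) and 2 STANDBY (secondary).
count_states() {
    awk '/State/ {states[$NF]++} END {for (s in states) print s "=" states[s]}' | sort
}
count_states <<'EOF'
   State                       ONLINE
   State                       STANDBY
   State                       ONLINE
   State                       STANDBY
EOF
```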
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
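A port with a climbing Loss of Sync Count (261 above, against single digits everywhere else) usually points to a flaky SFP or cable. Filtering such ports can be sketched like this (`flag_sync_loss` and the threshold are assumptions, not part of fcinfo):

```shell
#!/bin/bash
# Print remote ports whose Loss of Sync Count exceeds a threshold;
# feed it the output of fcinfo remote-port --port <WWN> --linkstat.
flag_sync_loss() {
    awk -v limit="${1:-100}" '
        /Remote Port WWN:/    { port = $NF }
        /Loss of Sync Count:/ { if ($NF + 0 > limit) print port, "Loss of Sync:", $NF }'
}
```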
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this issues a forced LIP (loop initialization primitive)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 7 ports have SFPs fitted but are unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
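A quick way to get these port-state counts from a switchshow dump is to tally column 6 of the port table; a sketch (`port_states` is an invented helper, and the sample lines are inlined; in practice pipe real `switchshow` output in):

```shell
#!/bin/bash
# Tally the State column of switchshow's port table. The $1 regex skips
# the header and separator lines; only numbered port rows are counted.
port_states() {
    awk 'NF >= 6 && $1 ~ /^[0-9]+$/ { n[$6]++ } END { for (s in n) print s "=" n[s] }' | sort
}
port_states <<'EOF'
  0   0   010000   id    N8       No_Light    FC
  1   1   010100   id    N8       Online      FC  E-Port  10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
 16  16   011000   --    N8       No_Module   FC  (No POD License) Disabled
 17  17   011100   --    N8       No_Module   FC  (No POD License) Disabled
EOF
```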
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via a script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to the backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
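Since the script above writes a new timestamped file per run, the backup directory grows without bound. A retention sketch to pair with it (the keep-count, the directory layout and the `prune_backups` name are assumptions; `xargs -r` is a GNU extension, drop the `-r` on Solaris):

```shell
#!/bin/bash
# Keep only the newest $2 config backups in directory $1, delete the rest.
prune_backups() {
    local dir="$1" keep="$2"
    # ls -1t lists newest first; everything past line $keep gets removed
    ls -1t "$dir"/*_config_*.txt 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}
```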
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
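For a dry run of the parser, a minimal [Zoning] section in configupload style can be generated like this (zone and alias names are invented; note the trailing blank line, which the range pattern /\[Zoning\]/,/^$/ relies on to end the section):

```shell
#!/bin/bash
# Write a small configupload-style [Zoning] fragment (all names invented)
# into a temp file and print its path, for feeding to the gawk parser.
sample=$(mktemp)
cat > "$sample" <<'EOF'
[Zoning]
cfg.Fabric1:zone_host1_stor1
zone.zone_host1_stor1:host1_p0;stor1_c1
alias.host1_p0:21:00:00:24:ff:05:74:e4
alias.stor1_c1:20:06:00:a0:b8:32:38:17
enable:Fabric1

EOF
echo "$sample"
```

Then run the parser on it, e.g. `gawk -f parse_zones.gawk "$sample"` (the script file name is assumed).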
=Commands : NetApp=
==fcp topology show : Where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice bonus is the "host address", which shows that we sit on switch ID 01, port 06.
==fcp wwpn-alias (set|show) : alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
0be74bcdeda1d2889cb32e873454aed3df322cb9
541
536
2014-10-06T14:29:00Z
Lollypop
2
/* Kommandos : Solaris */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibrechannel Analyse=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Gibt die Hardwarepfade der vorhandened Fibrechannelports und deren Status aus:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 Dualport Karten:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
Aus der Seite [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support Login benötigt):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
Also stecken unsere Karten in Slot 4 und 5.
===luxadm -e dump_map <HW_path>===
Gibt die Tabelle der bekannten Geräte an einem Port aus
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously two switches in the fabric reachable through port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached, together with two storages, to the switch with ID 1, and that switch has a link to a switch with ID 3 to which two more storages are attached.
* Node WWN
We see four disk devices here, each with two entries (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage we see two port WWNs here, i.e. two paths through this single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk Device: storage
Host Bus Adapter: FC card
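The Port_ID decoding above can be sketched as a small bash helper (illustration only; it assumes the standard 24-bit FC address layout <domain><area/port><AL_PA> and the hex Port_ID values as printed by dump_map):

```shell
# Decode a 24-bit FC Port_ID (hex, as printed by luxadm -e dump_map)
# into switch domain, switch port (area) and AL_PA.
decode_portid() {
  local id=$(( 16#$1 ))
  printf 'domain=%d port=%d alpa=0x%02x\n' \
    $(( (id >> 16) & 0xff )) $(( (id >> 8) & 0xff )) $(( id & 0xff ))
}
decode_portid 30200   # → domain=3 port=2 alpa=0x00
decode_portid 10800   # → domain=1 port=8 alpa=0x00
```

Applied to the table above, this reproduces the switch ID / switch port reading done by hand.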
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough hint at the firmware (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy when working with NetApps, to map the LUNs.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths the host should primarily use to access the LUN.
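To get a quick overview of how many paths are in which ALUA state, the Class/State pairs can be tallied from the luxadm display output (a sketch only; the column positions assume the listing format shown above):

```shell
# Count Class/State combinations in `luxadm display` output read on stdin.
path_summary() {
  awk '/^[[:space:]]*Class/ {cls=$2}
       /^[[:space:]]*State/ {print cls, $2}' | sort | uniq -c
}
# usage: luxadm display <Diskpath> | path_summary
```

For the example above this yields two "primary ONLINE" and two "secondary STANDBY" paths.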
==fcinfo==
===fcinfo hba-port===
Prints some information such as manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the principal switch of the fabric "Fabric1" (switchRole:, zoning:); all others are subordinate
# The switch is zoned (zoning:)
# The switch ID is "fffc01" (switchId:)
# It is a 24-port switch
# There is a dual ISL (inter-switch link) over E-Ports to another switch (san-sw_21)
# 7 ports are fitted with SFPs but not in use (0, 4, 11-15)
# 8 ports have no license and no SFP either (No_Module)
# 9 ports are in use
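The per-state port counts above can be derived mechanically from the switchshow output (a sketch; it assumes the column layout shown above, with the State in the 6th column):

```shell
# Tally the State column of switchshow output read on stdin;
# the header and separator lines are skipped by the numeric first-field test.
port_states() {
  awk '$1 ~ /^[0-9]+$/ { count[$6]++ } END { for (s in count) print count[s], s }'
}
# usage: save the switchshow output to a file, then: port_states < file
```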
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via a script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show : Where does my front-end SAN connect?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWNs do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we sit on switch ID 01, port 06.
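A minimal sketch of that decoding, assuming the 6-digit hex host address is laid out as <switch domain><port><AL_PA>:

```shell
# Split the 6-digit hex FC host address into its three bytes;
# addr is the example value from the fcp config output above.
addr=010600
echo "switch=$((16#${addr:0:2})) port=$((16#${addr:2:2})) alpa=$((16#${addr:4:2}))"
# → switch=1 port=6 alpa=0
```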
==fcp wwpn-alias (set|show) : Alias names for more clarity while debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
f8574051c84e68820ae59b29356f0add896f45b2
Solaris FC
0
177
534
2014-10-01T15:36:05Z
Lollypop
2
Lollypop verschob Seite [[Solaris FC]] nach [[Fibrechannel Analyse]]: Nicht nur Solaris
wikitext
text/x-wiki
#WEITERLEITUNG [[Fibrechannel Analyse]]
a6e1f099b504711fd3fc2fa7d6006fbe3086607e
Solaris SMF
0
100
537
277
2014-10-02T11:34:05Z
Lollypop
2
/* Running foreground processes */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
== Running foreground processes ==
<pre>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</pre>
5dad70643fa4728dcedc5990c6beb4b3bf2174ab
Perl Tipps und Tricks
0
178
538
2014-10-06T13:05:22Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Perl]] ==Unread while reading from filehandle== Dov Grobgeld made my day! <source lang=perl> # Found at a comment of Dov Grobgeld at https://gro…“
wikitext
text/x-wiki
[[Kategorie:Perl]]
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<source lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</source>
cac58b2907e7e7f7f15eaa5564281faf316e280d
540
538
2014-10-06T13:06:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Perl|Tipps und Tricks]]
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<source lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</source>
238142a33b4b30461f96614e8fcc7f613949a5db
Category:Perl
14
179
539
2014-10-06T13:05:44Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Category:Insekten
14
180
542
2014-10-07T13:09:40Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Tiere]]“
wikitext
text/x-wiki
[[Kategorie:Tiere]]
cdedcc18051d8835b96ae206bd357542432afa24
545
542
2014-10-07T13:11:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
{{#categorytree:Insekten|mode=pages|hideroot=on|depth=3}}
1bd31f0a07fbbfdea6daab3648f24097c4d27633
546
545
2014-10-07T13:12:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Tiere]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
d31e780da4ab61c502fb0c26d697c8c76ea3156c
Category:Ameisen
14
3
543
495
2014-10-07T13:10:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Insekten]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
6bfb8409b05eefad1733a6200663afa35e18fb5f
Category:Schaben
14
142
544
493
2014-10-07T13:10:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Insekten]]
{{#categorytree:Schaben|mode=pages|hideroot=on|depth=3}}
2363c6dcd0529e4b4602205e062f0b2783880065
547
544
2014-10-07T13:12:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Insekten]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
6bfb8409b05eefad1733a6200663afa35e18fb5f
Category:Tiere
14
40
548
494
2014-10-07T14:02:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Projekte]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=4}}
1862fa08b67f388dc6d2577e48f784f580ed7015
Gromphadorrhina oblongonata
0
181
553
2014-10-07T14:44:20Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorrhina oblongonata]] nach [[Gromphadorhina oblongonata]]: Korrektur Gattung
wikitext
text/x-wiki
#WEITERLEITUNG [[Gromphadorhina oblongonata]]
9743afac7c324163d70526c6bb92a54e10ce5711
Category:Gromphadorhina
14
183
557
2014-10-07T14:46:30Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Schaben]]“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
b2ea0c52ba3cc8ec3158b4eaccc987028f8b404c
Gromphadorrhina portentosa
0
184
560
2014-10-07T14:47:06Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorrhina portentosa]] nach [[Gromphadorhina portentosa]] und überschrieb dabei eine Weiterleitung
wikitext
text/x-wiki
#WEITERLEITUNG [[Gromphadorhina portentosa]]
0a92c3166236196b4104b99ca5201dcc63d1f9f7
Gromphadorhina spec.
0
144
561
518
2014-10-07T14:47:38Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Art = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
ae9427ddbcaec4ef234975eb5391f6ce845a806e
562
561
2014-10-07T14:48:30Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
dd956b4b86214df7390a68588b631d918e5507f3
Gromphadorhina portentosa
0
145
563
559
2014-10-07T14:49:03Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
e940e88db675d96a2934cf34ee78c288503ae102
567
563
2014-10-07T14:53:49Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
b964c8005b8781a38621f9add5172bfc7f7dc2d8
588
567
2014-10-26T17:38:51Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 12
| Winterruhe =
}}
496a0af2177c4baaba781b36e47f5c2d6da9a9d2
Gromphadorhina oblongonota
0
175
564
556
2014-10-07T14:49:44Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
e9423138ae0bc49ac0633d3fcd3f6f3c5851f537
566
564
2014-10-07T14:53:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
18586a8deba6acc5a7110aaad385a1ba0d5a4ec5
587
566
2014-10-26T17:38:26Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
}}
86f6612314fb5244ee19bbfcc63d250acff35cec
Elliptorhina javanica
0
146
565
550
2014-10-07T14:51:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica on a mushroom
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
8bcd459353d19841c2b8647d78b285f7308b5179
Princisia vanwaerebeki
0
152
568
522
2014-10-07T14:55:51Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
44ae002a54ae80b0b2e63bc54d5576d49e22e919
589
568
2014-10-26T17:46:32Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
}}
57bd94cb6198e5c4e32ed98a0cfeb3436ab97294
Blaptica dubia
0
150
569
520
2014-10-07T15:09:18Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| WissName = Blaptica dubia
| Autor = Serville, 1838
| Untergattung =
| Gattung = Blaptica
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Art = dubia
| Verbreitung = Argentinien, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
709afd761940ff9e14998d3f7d307e6de0fc159b
Therea olegrandjeani
0
173
570
515
2014-10-07T15:20:36Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor = Fritzsche & Zompro, 2008
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Tribus =
| Gattung = Therea
| Untergattung =
| Art = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
bef9b1a2b9f92c3a66d8492e04e1f6a9d30a6e1f
571
570
2014-10-07T15:21:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor = Fritzsche & Zompro, 2008
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Tribus =
| Gattung = Therea
| Untergattung =
| Art = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
8ab33e53b4d442906e9fafef7140bfe5994a7b10
Therea regularis
0
171
572
510
2014-10-07T15:22:28Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor = Grandcolas, 1993
| Familie = Polyphagidae
| Unterfamilie = Polyphaginae
| Gattung = Therea
| Untergattung =
| Art = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
A small, lively species.
3db8e5ec8c7783ddc28c7c292a94d6f9f67a7dd5
573
572
2014-10-07T15:22:57Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor = Grandcolas, 1993
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| Untergattung =
| Art = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
A small, lively species.
c19f24261039d275aecdcb242558e90b7af49449
Solaris cluster clone
0
185
574
2014-10-09T13:03:08Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]] If you need to recreate a cluster node from a survived node, you need to do the following steps ==Clone system disk== For example via m…“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
If you need to recreate a cluster node from a surviving node, perform the following steps.
==Clone system disk==
For example via metattach to the metaroot.
==Edit the usual Solaris parameters==
/etc/nodename
/etc/hostname.*
Check: /etc/inet/hosts
If mirrored by SVM:
# Edit /etc/vfstab of the clone to use the plain devices
# Edit /etc/system:
<source lang=bash>
* Begin MDD root info (do not edit)
** rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
</source>
Unmount the cloned disk
fsck the cloned disk's root slice
==Edit the cluster parameters==
Get the right node id from:
<source lang=bash>
# nawk '/cluster\.nodes\.[^.]*\.name/{split($1,field,"."); print field[3],$NF}' /etc/cluster/ccr/global/infrastructure
1 node-a
2 node-b
</source>
Then set the node id of the clone:
echo <nodeid> > /etc/cluster/nodeid
For example for node-b:
echo 2 > /etc/cluster/nodeid
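The lookup can be wrapped in a small helper (a sketch; the line format is the one shown above, and on Solaris you would use nawk instead of awk):

```shell
# Print the node id for a given node name, reading the infrastructure
# file on stdin (lines like "cluster.nodes.2.name  node-b").
nodeid_for() {
  awk -v n="$1" '/cluster\.nodes\.[^.]*\.name/ { split($1, f, "."); if ($NF == n) print f[3] }'
}
# usage: nodeid_for node-b < /etc/cluster/ccr/global/infrastructure
```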
2e5781f6f2242926076d5642c277b09e6f600bc2
ZFS RaidController
0
186
575
2014-10-13T12:52:58Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]] =MegaRAID= Konfiguration aller Disks als einzelne LogicalDrives <source lang=bash> -cfgclr -a0 -cfgldadd -r0[252:0] -a0 -cfgldadd -r0[252…“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=MegaRAID=
Configuration of all disks as individual LogicalDrives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop -EnDskCache -LAll -a0
q for quit
</source>
c49786d906d3ab42d41d3201536c2ec35f73faa0
576
575
2014-10-13T13:06:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS]]
=ZFS is better directly on the disks=
Because of ZFS's superior checksums, you want to disable the RAID controller or pass the disks through individually.
==X4170 with MegaRAID==
Configuration of all disks as individual LogicalDrives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop -EnDskCache -LAll -a0
q for quit
</source>
45b5ba187dc0eee1ee76624ef22bc138aad4d9ba
577
576
2014-10-13T13:46:48Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS]]
=ZFS is better directly on the disks=
Because of ZFS's superior checksums, you want to disable the RAID controller or pass the disks through individually.
==X4170 with MegaRAID==
Configuration of all disks as individual LogicalDrives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
q for quit
</source>
e60d40a67775732eff23c73d8b144ddc0a0539a0
ZFS Networker
0
158
578
475
2014-10-14T12:32:45Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
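With NAME=sample these variables expand as follows; a quick sanity check of the tr-based uppercasing:
<source lang=bash>
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
ZPOOL_BASEDIR=/local/${RGname}
echo "${RGname} ${NetworkerGroup} ${ZPOOL} ${ZPOOL_BASEDIR}"
# prints: sample-rg SAMPLE sample_pool /local/sample-rg
</source>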
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
Still not working...
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <networker-client> -s <networker-server> -g <NSR_GROUP> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <networker-server> -g <NSR_GROUP> -c <networker-client>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -e ${pid} | head -1)"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
case ${cmd_option} in
pre)
snapshot_destroy ${ZPOOLS} ${SNAPSHOT_NAME}
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
snapshot_create ${ZPOOLS} ${SNAPSHOT_NAME}
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#snapshot_destroy ${ZPOOLS} ${SNAPSHOT_NAME}
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
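A bash idiom used throughout the script deserves a short note: ${zfs_snapshot/@*/} strips everything from the @ onwards, turning a snapshot name back into its dataset name, onto which the script then hangs the nsr_backup clone. Stand-alone (the dataset name is made up):
<source lang=bash>
zfs_snapshot=sample_pool/data@nsr
echo ${zfs_snapshot/@*/}              # prints: sample_pool/data
echo ${zfs_snapshot/@*/}/nsr_backup   # prints: sample_pool/data/nsr_backup
</source>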
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
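The $(basename ${RGname} -rg) construct used above is simply basename's optional suffix argument stripping the -rg ending:
<source lang=bash>
RGname=sample-rg
basename ${RGname} -rg
# prints: sample
</source>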
ab7b9c41d08e2e2d3e57961b7607513ff289d8d2
579
578
2014-10-14T14:28:10Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
Still not working...
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c ndrcmstest-cl -s hhlokens01.srv.ndr-net.de -g NdrCms -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s hhlokens01.srv.ndr-net.de -g NdrCms -c ndrcmstest
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -e ${pid} | head -1)"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
case ${cmd_option} in
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
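To see how print_option pulls a value out of the savepnpc command line, here it is run stand-alone (the command line below is fabricated):
<source lang=bash>
# Stand-alone copy of print_option from the script above
print_option () {
  option=$1; shift
  while [ $# -gt 0 ]
  do
    case $1 in
      ${option}) echo $2; shift; shift ;;
      *) shift ;;
    esac
  done
}
print_option -c /usr/sbin/savepnpc -c sample-lh -s backupsrv -g SAMPLE -LL
# prints: sample-lh
</source>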
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
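The LH_RES lookup in the script parses clrs output. Here is the same program run against a fabricated two-line excerpt of what clrs show -t SUNW.LogicalHostname -p HostnameList prints (awk instead of nawk; all names are made up):
<source lang=bash>
cat > /tmp/clrs.demo <<'EOF'
Resource:       sample-lh-res
  HostnameList: sample-lh
EOF
awk -v Hostname="sample-lh" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}' /tmp/clrs.demo
# prints: sample-lh-res
rm -f /tmp/clrs.demo
</source>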
6c5b1559515e2bb9bda9fd0a681f628223f0f318
580
579
2014-10-14T14:28:37Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c ndrcmstest-cl -s hhlokens01.srv.ndr-net.de -g NdrCms -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s hhlokens01.srv.ndr-net.de -g NdrCms -c ndrcmstest
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -e ${pid} | head -1)"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
case ${cmd_option} in
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
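The logging side of the script can be exercised on its own. A minimal stand-alone run of print_log (the log path is made up; the leftover prefix printf from the else branch is dropped here):
<source lang=bash>
cmd_option=pre
print_log () {
  LOGFILE=$1 ; shift
  if [ $# -gt 0 ]
  then
    printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
  else
    while read data
    do
      printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
    done
  fi
}
rm -f /tmp/demo.log
print_log /tmp/demo.log "direct message"
echo "piped message" | print_log /tmp/demo.log
grep -c '(pre):' /tmp/demo.log
# prints: 2
</source>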
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
6dbcf3b5ad27d21504d2865abf401e8953dcbd83
581
580
2014-10-14T14:35:32Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -e ${pid} | head -1)"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
case ${cmd_option} in
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
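The print_option helper above simply scans an argument list for a flag and echoes the word that follows it; that is how the script pulls -c/-s/-g values out of the parent savepnpc/pstclntsave command line. A standalone sketch of the same logic (re-declared here so it can run outside the script):

```shell
#!/bin/bash
# Same logic as print_option in the script: walk the argument list and
# print the value that follows the requested option.
print_option () {
    option=$1; shift
    while [ $# -gt 0 ]
    do
        case $1 in
            ${option}) echo $2; shift 2 ;;
            *)         shift ;;
        esac
    done
}

# Example command line as savepnpc would pass it (names are made up)
cmdline="/usr/sbin/savepnpc -c sample-lh -s backupsrv -g SAMPLE -LL"
print_option -c ${cmdline}   # → sample-lh
print_option -s ${cmdline}   # → backupsrv
```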
<source lang=bash>
# digest -a md5 /opt/nsr/bin/nsr_snapshot.sh
c9020478ed49e89240e0b56961e46ac6
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
576e1d5448e4da6d11107ecc74f279f2f7b08430
582
581
2014-10-14T14:36:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
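To make the naming scheme concrete, this is what those assignments evaluate to for NAME=sample:

```shell
# Worked example of the naming scheme: everything is derived from NAME.
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
ZPOOL_BASEDIR=/local/${RGname}
echo "${RGname} ${NetworkerGroup} ${ZPOOL} ${ZPOOL_BASEDIR}"
# → sample-rg SAMPLE sample_pool /local/sample-rg
```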
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -e ${pid} | head -1)"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
case ${cmd_option} in
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
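The print_log helper works in two modes: called with a message it writes one timestamped line, and called with no message it timestamps every line read from stdin, which is how the zfs command output is captured through the pipe. A slightly simplified standalone sketch of the same logic:

```shell
#!/bin/bash
# Same logic as print_log in the script (slightly simplified): prefix
# every line with a timestamp and the current mode, append to the logfile.
cmd_option=pre
print_log () {
    LOGFILE=$1; shift
    if [ $# -gt 0 ]
    then
        # Argument form: log "$*" as one line
        printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
    else
        # Filter form: timestamp each line read from stdin
        while read data
        do
            printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
        done
    fi
}

LOG=$(mktemp)
print_log ${LOG} "direct message"        # argument form
echo "piped message" | print_log ${LOG}  # filter form, as used after the zfs commands
cat ${LOG}
rm -f ${LOG}
```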
MD5 checksum:
<source lang=bash>
# digest -a md5 /opt/nsr/bin/nsr_snapshot.sh
c9020478ed49e89240e0b56961e46ac6
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
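The `basename ${RGname} -rg` calls above use basename's suffix argument to strip the trailing -rg and recover the base name, from which all the resource names are built. A quick check:

```shell
# basename with a second argument strips that suffix from the result
RGname=sample-rg
basename ${RGname} -rg                          # → sample
echo "$(basename ${RGname} -rg)-hasp-zfs-res"   # → sample-hasp-zfs-res
echo "$(basename ${RGname} -rg)-lh"             # → sample-lh
```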
Now we have a client name we can connect to: sample-lh
94131abadb7c6729a48cace58d0d673d00fa4dd9
583
582
2014-10-14T14:48:26Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
case ${cmd_option} in
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
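The `${zfs_snapshot/@*/}` expansions in snapshot_create and snapshot_destroy use bash pattern substitution to strip the @snapshot suffix and recover the dataset name, onto which the /nsr_backup clone name is appended:

```shell
# Bash pattern substitution: replace the first match of '@*'
# (the @ and everything after it) with nothing.
zfs_snapshot=sample_pool/nsr@nsr
echo ${zfs_snapshot/@*/}             # → sample_pool/nsr
echo ${zfs_snapshot/@*/}/nsr_backup  # → sample_pool/nsr/nsr_backup
```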
MD5 checksum:
<source lang=bash>
# digest -a md5 /opt/nsr/bin/nsr_snapshot.sh
62e591f1961ca5ecc9b344fbf269ea57
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
ef25cd0ec0c34da141b65e66fcd4edc5848b87a0
591
583
2014-10-28T12:54:54Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst|init <ZPool-Name>)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
if [ "_${cmd_option}_" != "_init_" ]
then
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
else
LOGFILE=/nsr/logs/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${LOGFILE} "ZPool to init : ${ZPOOL}"
fi
case ${cmd_option} in
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
MD5 checksum:
<source lang=bash>
# digest -a md5 /opt/nsr/bin/nsr_snapshot.sh
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
961bab4d074017f6b6061afa3d7eb3c19c89ddf6
592
591
2014-10-28T12:55:58Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
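For reference, with NAME=sample these definitions expand to the names used throughout the rest of this page:

<source lang=bash>
# With NAME=sample the variables above expand as follows
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
ZPOOL_BASEDIR=/local/${RGname}
echo "${RGname} ${NetworkerGroup} ${ZPOOL} ${ZPOOL_BASEDIR}"
</source>

This prints: sample-rg SAMPLE sample_pool /local/sample-rg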
==Define a resource for Networker==
We now need a resource definition in our Networker directory, like this:
<source lang=bash>
# zfs create ${ZPOOL}/nsr
# mkdir ${ZPOOL_BASEDIR}/nsr/{bin,log,res}
# cat > ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pre >${ZPOOL_BASEDIR}/nsr/log/networker_precmd.log 2>&1";
pstcmd: "${ZPOOL_BASEDIR}/nsr/bin/prepst_command.sh pst >${ZPOOL_BASEDIR}/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
Now create a link to this file on every cluster node:
<source lang=bash>
# ln -s ${ZPOOL_BASEDIR}/nsr/res/${NetworkerGroup}.res /nsr/res/${NetworkerGroup}.res
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst|init <ZPool-Name>)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
if [ "_${cmd_option}_" != "_init_" ]
then
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
else
LOGFILE=/nsr/logs/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${LOGFILE} "ZPool to init : ${ZPOOL}"
fi
case ${cmd_option} in
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
MD5 checksum:
<source lang=bash>
# digest -a md5 /opt/nsr/bin/nsr_snapshot.sh
6e788968bdee9f3cbe747a3151aa0b5c
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
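Throughout the script the clone name is derived from the snapshot name with the bash expansion ${zfs_snapshot/@*/}, which deletes the @-suffix. A minimal illustration (the snapshot name here is just an example):

<source lang=bash>
# ${var/@*/} deletes the first match of the glob "@*",
# turning pool/fs@nsr into pool/fs
zfs_snapshot="sample_pool/data@nsr"
echo "${zfs_snapshot/@*/}/nsr_backup"
</source>

This prints sample_pool/data/nsr_backup, the dataset name used for the read-only clone.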
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
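The $(basename ${RGname} -rg) pieces work because basename with a second argument strips that suffix from the name. A quick sketch:

<source lang=bash>
# basename NAME SUFFIX strips SUFFIX, so sample-rg becomes sample
RGname=sample-rg
echo "$(basename ${RGname} -rg)-lh"
</source>

This prints sample-lh, the client name used above.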
67dc41c0c367c12da47bc2eab1013c3b977929ad
593
592
2014-10-28T12:59:06Z
Lollypop
2
/* Define a resource for Networker */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
We now need a resource definition in our Networker directory, like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/prepst_command.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/prepst_command.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst|init <ZPool-Name>)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
if [ "_${cmd_option}_" != "_init_" ]
then
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
else
LOGFILE=/nsr/logs/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${LOGFILE} "ZPool to init : ${ZPOOL}"
fi
case ${cmd_option} in
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
MD5 checksum:
<source lang=bash>
# digest -a md5 /opt/nsr/bin/nsr_snapshot.sh
6e788968bdee9f3cbe747a3151aa0b5c
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
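The print_log function above accepts a message either as arguments or on stdin. A condensed, standalone copy (without the surrounding cluster logic) shows both modes:

<source lang=bash>
# Condensed copy of the script's print_log: prefix each line with a
# timestamp and the current command option, in argument or pipe mode
cmd_option=pre
print_log () {
    LOGFILE=$1 ; shift
    if [ $# -gt 0 ]
    then
        printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
    else
        while read data
        do
            printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
        done
    fi
}
LOGFILE=$(mktemp)
print_log ${LOGFILE} "direct message"
echo "piped message" | print_log ${LOGFILE}
cat ${LOGFILE}
</source>

Both lines end up in the logfile with the same timestamp/option prefix.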
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
3db399eb976d21d2bafe246bd1dacc0c3d780605
594
593
2014-10-28T13:00:27Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
We now need a resource definition in our Networker directory, like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/prepst_command.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/prepst_command.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst|init <ZPool-Name>)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
if [ "_${cmd_option}_" != "_init_" ]
then
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
else
LOGFILE=/nsr/logs/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${LOGFILE} "ZPool to init : ${ZPOOL}"
fi
case ${cmd_option} in
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
MD5 checksum:
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
6e788968bdee9f3cbe747a3151aa0b5c
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
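The print_option helper scans a command line for a given option and prints the word following it. Here it runs standalone against a made-up savepnpc command line (the client and server names are only examples):

<source lang=bash>
# Copy of the script's print_option: print the value following OPTION
print_option () {
    option=$1; shift
    while [ $# -gt 0 ]
    do
        case $1 in
            ${option})
                echo $2
                shift
                shift
                ;;
            *)
                shift
                ;;
        esac
    done
}
# Example command line; relies on word splitting, as in the script
commandline="/usr/sbin/savepnpc -c sample-lh -s backupserver -g SAMPLE -LL"
print_option -c ${commandline}
print_option -s ${commandline}
</source>

This prints sample-lh and backupserver, which the script uses as CLIENT_NAME and SERVER_NAME.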
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
2ebcdb0082b59b0ef6293b7b9679c87ce3c34d6f
595
594
2014-10-28T13:01:27Z
Lollypop
2
/* Define a resource for Networker */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
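As a quick sanity check, the derived values can be echoed; the tr call is what upper-cases the name to form the Networker group:
<source lang=bash>
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
echo "${RGname} ${NetworkerGroup}"
</source>
This prints "sample-rg SAMPLE".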
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,logs,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst|init <ZPool-Name>)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
if [ "_${cmd_option}_" != "_init_" ]
then
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
else
LOGFILE=/nsr/logs/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${LOGFILE} "ZPool to init : ${ZPOOL}"
fi
case ${cmd_option} in
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
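The `${zfs_snapshot/@*/}` substitution used in snapshot_create and snapshot_destroy strips the @-suffix from a snapshot name to recover the dataset name, so the nsr_backup clone lands next to the original dataset. A standalone illustration with a made-up dataset name:
<source lang=bash>
zfs_snapshot="sample_pool/data@nsr"
# ${var/@*/} deletes the first match of the pattern "@*"
echo "${zfs_snapshot/@*/}/nsr_backup"
</source>
This prints sample_pool/data/nsr_backup.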
MD5 checksum:
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
6e788968bdee9f3cbe747a3151aa0b5c
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
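The named-pipe trick in the main part of the script (mknod, tee, then exec redirection) funnels everything the script writes into the timestamped log. The pattern can be tried on its own; this sketch uses mkfifo and a plain timestamp loop instead of the Networker-specific print_log, and the file names are hypothetical:
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/np_demo.$$.log
named_pipe=/tmp/.named_pipe.$$
trap "rm -f ${named_pipe}" EXIT
mkfifo ${named_pipe}
# Background reader: prefix every line with a timestamp
while read line
do
    printf "%s: %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${line}" >> ${LOGFILE}
done < ${named_pipe} &
exec 3>&1 4>&2          # save the original stdout/stderr
exec > ${named_pipe} 2>&1
echo "hello from stdout"
echo "oops on stderr" >&2
exec 1>&3 2>&4          # restore; closing the pipe writer ends the reader
wait
</source>
Afterwards ${LOGFILE} holds both lines, each prefixed with a timestamp.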
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
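The suffix-stripping in the clrs create call relies on basename taking a second argument; given a resource group name ending in -rg, basename drops that suffix:
<source lang=bash>
RGname=sample-rg
# basename NAME SUFFIX strips SUFFIX, turning "sample-rg" into "sample"
echo "$(basename ${RGname} -rg)-nsr-res"
</source>
This prints sample-nsr-res, matching the expanded command above.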
6fd3a2ca9a7c39050547423d7810a64248a87e5f
MediaWiki:Geshi.css
8
187
584
2014-10-14T15:13:57Z
Lollypop
2
Page created: "/* CSS in this MediaWiki system message is applied to the GeSHi syntax highlighting */ div.mw-geshi { padding: 1em; margin: 1em 0; border: 1px …"
css
text/css
/* CSS in this MediaWiki system message is applied to the GeSHi syntax highlighting */
div.mw-geshi {
padding: 1em;
margin: 1em 0;
border: 1px dashed #2f6fab;
background-color: #f9f9f9;
}
5827a85906fe8982cfa54142e2cafb3373bacead
Template:Systematik
10
117
585
549
2014-10-26T17:25:45Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
7f6a7881504bd89d2504224e7bf458efd6330750
586
585
2014-10-26T17:27:20Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
f45f4a4229b0670d64789e1c3c639a856cfc861c
Archimandrita tesselata
0
148
590
517
2014-10-26T17:49:22Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| WissName = Archimandrita tesselata
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It owes its German name "Pfefferschabe" (peppered cockroach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On a cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Moulting of an Archimandrita tesselata male">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
6470bb65fd073d80b09aea2732665c7b27e55b4a
Solaris OracleDB zone
0
188
596
2014-11-05T16:32:31Z
Lollypop
2
Page created: "[[Kategorie:Solaris]] =Setup Solaris server with OracleDB in a zone= Our setup is a 48GB x86-Server ==Limit ZFS ARC== Add to /etc/system: <source lang=bash> s…"
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
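The same value can be computed with the modern arithmetic expansion $(( )) instead of the older $[ ] form; 8 GB is 2^33 bytes:
<source lang=bash>
LIMIT_GB=8
printf "set zfs:zfs_arc_max = 0x%x\n" $(( LIMIT_GB * 1024 * 1024 * 1024 ))
</source>
This prints set zfs:zfs_arc_max = 0x200000000, matching the output above.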
0cb6b9051d6860ed83dd768c5eafac5b8117b62b
597
596
2014-11-05T17:07:33Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=5-6
end
set autoboot=true
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
989e3418a0fd6139a885db20b8a5d8dc3df97936
598
597
2014-11-05T17:50:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=5-6
end
set scheduling-class=FSS
set max-shm-memory=30G
verify
commit
" | zonecfg -z ${ZONENAME} -f -
zoneadm -z ${ZONENAME} install
</source>
0be6450ae47cae6e41c495743f6db1a86e8adad4
599
598
2014-11-05T18:06:37Z
Lollypop
2
/* Create Zone */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
zoneadm -z ${ZONENAME} install
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
zoneadm -z ${ZONENAME} boot
</source>
bd3c48aa77a4934621116186216cca9fba41fe26
600
599
2014-11-05T18:07:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
zoneadm -z ${ZONENAME} install
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
zoneadm -z ${ZONENAME} boot
</source>
2a351959a2843cd89391f68e5b343214588d944e
601
600
2014-11-05T18:09:19Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME}
</source>
3ac940012f9ef606ebcfd23c5e930c9247a10ad4
602
601
2014-11-05T18:17:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
c40ffd23641bff84ee0644219bdf2b7dda8fe49d
603
602
2014-11-05T20:31:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Solaris server with OracleDB and CPU limit in a zone=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
f2354daece6651f781076ab1f9d766fc5a1862e4
604
603
2014-11-05T20:32:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup OracleDB on a Solaris zone with CPU limit=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
fd3e63234b8bffdb1351b0e910462373407e3e49
605
604
2014-11-05T20:33:30Z
Lollypop
2
/* Setup OracleDB on a Solaris zone with CPU limit */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Oracle Database on a Solaris zone with CPU limit=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
NUMBER_OF_CPUS=2
</source>
Create the zone with:
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
7566e75d81c997cec2ede81e3e95c51bb6db2dd1
608
605
2014-11-06T10:52:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Oracle Database on a Solaris zone with CPU limit=
Our setup is an x86 server with 48 GB of RAM.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $(( ${LIMIT_GB} * 1024 * 1024 * 1024 ))
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</source>
Create zone with
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
96150e29a31264c521784c8221d3524e17460499
609
608
2014-11-06T16:29:36Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Oracle Database on a Solaris zone with CPU limit=
Our setup is an x86 server with 48 GB of RAM.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $(( ${LIMIT_GB} * 1024 * 1024 * 1024 ))
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</source>
Create zone with
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
==Create ZPools==
<source lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATAVDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZILVDEV="mirror c1t3d0 c1t4d0"
REDOPOOL=redopool
REDOPOOL_DATAVDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZILVDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATAVDEV="mirror c1t9d0 c1t10d0"
DB_BLOCK_SIZE=8192
</source>
<source lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATAVDEV}
</source>
<source lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATAVDEV} log ${REDOPOOL_ZILVDEV}
</source>
<source lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATAVDEV}
</source>
fb1834f130dfe0b6ac495787a48f2037ecb5dddc
610
609
2014-11-06T18:15:31Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Oracle Database on a Solaris zone with CPU limit=
Our setup is an x86 server with 48 GB of RAM.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $(( ${LIMIT_GB} * 1024 * 1024 * 1024 ))
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
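A quick sanity check in the other direction converts the configured hex value back to gigabytes:

```shell
# Reverse of the calculation above: hex bytes back to GB.
printf "%d GB\n" $(( 0x200000000 / 1024 / 1024 / 1024 ))
```

For the 8 GB example value this prints "8 GB".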
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</source>
Create zone with
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
==Create ZPools==
I used this paper: [http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf]
Values are for Solaris 10.
<source lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATAVDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZILVDEV="mirror c1t3d0 c1t4d0"
REDOPOOL=redopool
REDOPOOL_DATAVDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZILVDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATAVDEV="mirror c1t9d0 c1t10d0"
DB_BASEPATH=/database
DB_BLOCK_SIZE=8192
</source>
<source lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATAVDEV}
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/data ${DATABASEPOOL}/data
zfs set logbias=throughput ${DATABASEPOOL}/data
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/index ${DATABASEPOOL}/index
zfs set logbias=throughput ${DATABASEPOOL}/index
zfs create -o mountpoint=${DB_BASEPATH}/temp ${DATABASEPOOL}/temp
zfs set logbias=throughput ${DATABASEPOOL}/temp
zfs create -o mountpoint=${DB_BASEPATH}/undo ${DATABASEPOOL}/undo
zfs set logbias=throughput ${DATABASEPOOL}/undo
</source>
<source lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATAVDEV} log ${REDOPOOL_ZILVDEV}
zfs create -o mountpoint=${DB_BASEPATH}/redo ${REDOPOOL}/redo
zfs set logbias=latency ${REDOPOOL}/redo
</source>
<source lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATAVDEV}
zfs create -o compression=on -o mountpoint=${DB_BASEPATH}/archive ${ARCHIVEPOOL}/archive
zfs set primarycache=metadata ${ARCHIVEPOOL}/archive
</source>
d4ade81f231ac68322925c1a18fbb780d896abfc
Fibrechannel Analyse
0
139
606
541
2014-11-06T10:18:10Z
Lollypop
2
/* Kommandos: NetApp */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with two storage systems to the switch with ID 1 and have a link to a switch with ID 3 to which two more storage systems are attached.
* Node WWN
We see four disk devices here, with two entries each (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths through our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: a storage system
Host bus adapter: an FC card
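The Port_ID decoding can be sketched with shell arithmetic; a minimal sketch, assuming the trailing byte (the <??> above) is the AL_PA, which is 0 on a switched fabric:

```shell
# Decode a luxadm Port_ID (a 24-bit FC address) into switch ID and switch port.
# Example value taken from Pos 0 of the dump_map output above.
PORT_ID=30200
printf "Switch ID: %d, Switch port: %d\n" \
    "$(( 0x${PORT_ID} >> 16 ))" \
    "$(( (0x${PORT_ID} >> 8) & 0xff ))"
```

For Port_ID 30200 this prints switch ID 3, switch port 2, matching the table above.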
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So this is a StorageTek 6580/6780.
* Revision: 0784
A rough firmware estimate (firmware version: 07.84.47.10).
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After these, one block per path to this device follows, consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, and so on:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful! Does a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "principal" of the fabric "Fabric1" (all others are "subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-port (san-sw_21)
# Seven ports are populated with SFPs but not in use (0, 4, 11-15)
# Eight ports have neither a license nor an SFP (No_Module)
# Nine ports are in use
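Such a tally can also be produced mechanically from a captured listing; a sketch (the file /tmp/switchshow.txt and its abridged sample contents are made up for illustration):

```shell
# Count ports per state from a captured switchshow listing.
# Port lines start with a numeric index; the state is the sixth column.
cat > /tmp/switchshow.txt <<'EOF'
  0   0   010000   id    N8   No_Light    FC
  1   1   010100   id    N8   Online      FC  E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
 16  16   011000   --    N8   No_Module   FC  (No POD License) Disabled
EOF
awk '$1 ~ /^[0-9]+$/ { count[$6]++ } END { for (s in count) print s, count[s] }' /tmp/switchshow.txt
```

On a real switch, pipe the output of switchshow into the same awk filter instead of the sample file.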
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===portstatsshow===
===portstatsclear===
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
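The vendor lookup in the script keys on the OUI embedded in the WWN (characters 2-7 for NAA-5 WWNs, 5-10 otherwise); extracted by hand it looks like this (a sketch; the sample WWN is taken from the switchshow output above):

```shell
# Extract the 6-hex-digit vendor OUI from a colon-separated WWN,
# mirroring the start=2 / start=5 logic of the awk script above.
WWN=50:0a:09:81:8d:32:5d:c4
BARE=$(printf '%s' "${WWN}" | tr -d ':')
case "${BARE}" in
  5*) OUI=${BARE:1:6} ;;   # NAA-5: OUI right after the first nibble
  *)  OUI=${BARE:4:6} ;;   # otherwise: OUI at byte offset 2
esac
echo "OUI: ${OUI}"
```

For this WWN the OUI is 00a098, i.e. NetApp in the script's vendor table.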
=Commands: NetApp=
==fcp topology show : Where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWNs do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which shows that we are attached to switch ID 01, port 06.
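This "host address" uses the same 24-bit FC address layout as the Port_ID in the luxadm section; decoding it (sketch):

```shell
# Decode the fcp config "host address" into switch domain (ID) and port.
ADDR=010600
printf "Switch ID: %02d, Port: %02d\n" \
    "$(( 0x${ADDR} >> 16 ))" "$(( (0x${ADDR} >> 8) & 0xff ))"
```

For 010600 this prints switch ID 01, port 06, as stated above.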
==fcp wwpn-alias (set|show) : Alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and zpools)==
If you want to know which NetApp LUNs belong to a zpool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
0e0496fc59c2a7a215cdd3b34262d989eb8db9fa
607
606
2014-11-06T10:22:05Z
Lollypop
2
/* Port-Kommandos */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switch_port><AL_PA> (the AL_PA byte is 00 for fabric-attached devices)
So there are evidently two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we sit, together with two storage systems, on the switch with ID 1 and have a link to a switch with ID 3, to which two further storage systems are attached.
* Node WWN
We see four disk devices here, each with two entries (same node WWN)
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths over our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
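The Port_ID column is simply the 24-bit Fibre Channel address, so the decoding described above can be scripted; a small sketch in bash (decode_portid is an invented name):

```shell
# Split a 24-bit FC address into Domain (switch ID), Area (switch port)
# and AL_PA (00 for fabric-attached N_Ports).
decode_portid() {
  local id=$((16#$1))
  printf 'Port_ID %s -> switch %d, port %d, AL_PA %d\n' \
    "$1" $(( id >> 16 )) $(( (id >> 8) & 255 )) $(( id & 255 ))
}

decode_portid 30200   # a storage port on switch 3
decode_portid 10800   # our own HBA on switch 1
```

Applied to the dump_map output above this reproduces the switch/port pairs listed in the interpretation.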
===luxadm probe===
Lists all detected Fibre Channel devices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough indication of the firmware (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host over which paths it should primarily access the LUN.
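To see this controller-to-HBA mapping for all controllers at once, you can walk /dev/cfg; a sketch (list_ctrl_paths is an invented name, the optional directory argument just makes the helper reusable):

```shell
# Print each cfgadm controller together with the hardware path its
# symlink resolves to (the last field of `ls -l` is the link target).
list_ctrl_paths() {
  local d=${1:-/dev/cfg}
  local c
  for c in "$d"/c*; do
    printf '%s -> %s\n' "${c##*/}" "$(ls -l "$c" | awk '{print $NF}')"
  done
}

# usage: list_ctrl_paths          (defaults to /dev/cfg)
```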
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
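With this many remote ports it helps to condense the statistics to one line per port. A sketch of such a filter (linkstat_summary is an invented name; the threshold of 10 is arbitrary):

```shell
# Reduce `fcinfo remote-port ... --linkstat` output to one line per
# remote port, flagging conspicuous Loss-of-Sync counters.
linkstat_summary() {
  awk '/Remote Port WWN:/    { port = $NF }
       /Loss of Sync Count:/ {
         flag = ($NF + 0 > 10) ? " <-- check cabling/SFP" : ""
         printf "%s loss_of_sync=%d%s\n", port, $NF, flag
       }'
}

# usage: fcinfo remote-port --port 2100001b32904445 --linkstat | linkstat_summary
```

In the output above this would flag 201700a0b86e103c (261 sync losses).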
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this forces a LIP (Loop Initialization Primitive)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# Zoning is enabled on the switch (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (InterSwitchLink) to the E-Ports of another switch (san-sw_21)
# 7 ports are populated with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
If you want to see which WWNs hide behind an NPIV port, portloginshow helps.
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Scripted backup of the switch config==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
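The OUI lookup that the script performs (offset 2 for NAA-5 WWNs, offset 5 otherwise) also works standalone, e.g. as a bash helper (wwn_oui is an invented name):

```shell
# Extract the vendor OUI from a WWN, with or without colons.
wwn_oui() {
  local wwn=${1//:/}
  case $wwn in
    5*) echo "${wwn:1:6}" ;;   # NAA-5: OUI directly after the NAA nibble
    *)  echo "${wwn:4:6}" ;;   # NAA-1/2: OUI after the first four nibbles
  esac
}

wwn_oui 21:00:00:24:ff:05:74:e4   # 0024ff (Qlogic)
wwn_oui 50:0a:09:81:8d:32:5d:c4   # 00a098 (NetApp)
```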
=Commands: NetApp=
==fcp topology show : Where does my frontend SAN connect?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : Alias names for more clarity while debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and zpool)==
If you want to know which NetApp LUNs belong to a zpool, you can do it as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
d750e274693d06c6dd95f978a070ed4e631bc334
Solaris OracleDB zone
0
188
611
610
2014-11-06T18:17:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
=Setup Oracle Database on a Solaris zone with CPU limit=
Our setup is an x86 server with 48 GB of RAM
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $(( LIMIT_GB * 1024 * 1024 * 1024 ))
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</source>
Create zone with
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
==Create ZPools==
I used this paper: [http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf]
Values are for Solaris 10.
<source lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATA_VDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZIL_VDEV="mirror c1t3d0 c1t4d0"
REDOPOOL_NAME=redopool
REDOPOOL_DATA_VDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZIL_VDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATA_VDEV="mirror c1t9d0 c1t10d0"
DB_BASEPATH=/database
DB_BLOCK_SIZE=8192
</source>
<source lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATA_VDEV} log ${DATABASEPOOL_ZIL_VDEV}
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/data ${DATABASEPOOL}/data
zfs set logbias=throughput ${DATABASEPOOL}/data
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/index ${DATABASEPOOL}/index
zfs set logbias=throughput ${DATABASEPOOL}/index
zfs create -o mountpoint=${DB_BASEPATH}/temp ${DATABASEPOOL}/temp
zfs set logbias=throughput ${DATABASEPOOL}/temp
zfs create -o mountpoint=${DB_BASEPATH}/undo ${DATABASEPOOL}/undo
zfs set logbias=throughput ${DATABASEPOOL}/undo
</source>
<source lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATA_VDEV} log ${REDOPOOL_ZIL_VDEV}
zfs create -o mountpoint=${DB_BASEPATH}/redo ${REDOPOOL}/redo
zfs set logbias=latency ${REDOPOOL}/redo
</source>
<source lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATA_VDEV}
zfs create -o compression=on -o mountpoint=${DB_BASEPATH}/archive ${ARCHIVEPOOL}/archive
zfs set primarycache=metadata ${ARCHIVEPOOL}/archive
</source>
4bc4dacf804d31dabba8bf60e849a33eb451ddb2
ZFS RaidController
0
186
612
577
2014-11-07T09:49:39Z
Lollypop
2
/* X4170 mit MegaRAID */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS]]
=ZFS is better directly on the disks=
Because of ZFS's superior checksumming you want to switch off the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuring all disks as individual logical drives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpBootDrive -set -L1 -a0
q for quit
</source>
ab5d152d38b8380f6b6a54cde6a13ccc291107d5
613
612
2014-11-07T09:59:08Z
Lollypop
2
/* X4170 mit MegaRAID */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS]]
=ZFS is better directly on the disks=
Because of ZFS's superior checksumming you want to switch off the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuring all disks as individual logical drives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -physdrv[252:0] -a0
-AdpBootDrive -set -physdrv[252:1] -a0
q for quit
</source>
2181f6493a60cafc5d0d1babc12438d3e05bf167
615
614
2014-11-07T10:10:58Z
Lollypop
2
/* X4170 mit MegaRAID */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS]]
=ZFS is better directly on the disks=
Because of ZFS's superior checksumming you want to switch off the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuring all disks as individual logical drives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpSetProp MaintainPdFailHistoryEnbl 0 a0
q for quit
</source>
a26bc808f0cd3daf255baa8741e4a6d9040ecd43
616
615
2014-11-07T10:12:08Z
Lollypop
2
/* X4170 mit MegaRAID */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS]]
=ZFS is better directly on the disks=
Because of ZFS's superior checksumming you want to switch off the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuring all disks as individual logical drives
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpSetProp MaintainPdFailHistoryEnbl 0 -a0
q for quit
</source>
06db38f3e3075d3fde1bf033d70444223b648810
SunCluster oneliner
0
189
617
2014-11-11T14:06:56Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:SunCluster]] ==Resource Groups to remaster== <source lang=bash> # /usr/cluster/bin/clrg status | \ /usr/bin/nawk ' NR<=5 || ( NF>=3 && $(NF-1)=="Ye…“
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
==Resource Groups to remaster==
<source lang=bash>
# /usr/cluster/bin/clrg status | \
/usr/bin/nawk '
NR<=5 || ( NF>=3 && $(NF-1)=="Yes" ){
next;
}
NF==4 {
rg=$1;
primary=$2;
if($NF=="Online"){
printf "%20s\t%s on %s\n",rg,$NF,primary
}
while($0 !~ /^$/){
getline;
if($NF=="Online"){
printf "%20s\t%s on %s, but not on primary %s\n",rg,$NF,$1,primary;
list=list" "rg
}
}
}
END{
if(list != ""){
printf "To fix it do:\n\tclrg remaster %s\n",list;
}
}'
</source>
510064d62a676cbc5ddb8b2ae51e40402be90a36
655
654
2015-01-08T13:27:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SunCluster|Einzeiler]]
==Resource Groups to remaster==
<source lang=bash>
# /usr/cluster/bin/clrg status | \
/usr/bin/nawk '
NR<=5 || ( NF>=3 && $(NF-1)=="Yes" ){
next;
}
NF==4 {
rg=$1;
primary=$2;
if($NF=="Online"){
printf "%20s\t%s on %s\n",rg,$NF,primary
}
while($0 !~ /^$/){
getline;
if($NF=="Online"){
printf "%20s\t%s on %s, but not on primary %s\n",rg,$NF,$1,primary;
list=list" "rg
}
}
}
END{
if(list != ""){
printf "To fix it do:\n\tclrg remaster %s\n",list;
}
}'
</source>
ec6e6927fe99936ffac8327306e4c811858192e4
Brocade
0
107
618
292
2014-11-26T11:52:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:FC]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch we are looking at. Here a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
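If you only need a handful of models, a tiny lookup on the part before the dot does the job (a sketch; switch_model is an invented name and carries only a few entries from the switch-type table further down):

```shell
# Map a switchshow switchType value to its base model; the digits
# before the dot identify the model, the rest encodes the revision.
switch_model() {
  case ${1%%.*} in
    64)  echo "Brocade 5300" ;;
    66)  echo "Brocade 5100" ;;
    71)  echo "Brocade 300" ;;
    109) echo "Brocade 6510" ;;
    *)   echo "unknown switchType $1" ;;
  esac
}

switch_model 71.2   # Brocade 300
```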
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
and
* Subordinate
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''CAUTION: This is not hitless!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage, and tapes are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs may see each other.
Nowadays you basically only do WWN zoning, because it is the most flexible and safest option. It lets you simply move cables around within the [[#Fabric|fabric]] without a device suddenly seeing something different than before.
With port zoning there is the risk of plugging a cable into the wrong port.
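A minimal WWN-zoning session could look like this (a sketch: the alias and zone names are invented; the WWNs are the QLogic and NetApp ports from the switchshow output above):

```text
brocade:admin> alicreate "srv1_hba0","21:00:00:24:ff:36:45:02"
brocade:admin> alicreate "nas1_0a","50:0a:09:81:96:c8:3e:f8"
brocade:admin> zonecreate "z_srv1_nas1","srv1_hba0; nas1_0a"
brocade:admin> cfgadd "Fabric1","z_srv1_nas1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
```

cfgsave makes the change persistent; cfgenable activates it fabric-wide.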
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
fbc2cd0e833facbbec41c7deef6f2219c978a5c6
619
618
2014-11-26T12:20:08Z
Lollypop
2
/* SSH mit public key */
wikitext
text/x-wiki
[[Kategorie:FC]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch model we are dealing with: in this case a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
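The numeric switchType can also be resolved programmatically. A minimal sketch, assuming the field format <code>&lt;type&gt;.&lt;revision&gt;</code>; the dictionary covers only a few entries from the table further down:

```python
# Map the integer part of switchshow's "switchType" field to a product name.
# Only a small subset of the full Brocade table is included here.
SWITCH_TYPES = {
    34: "Brocade 200E Switch",
    58: "Brocade 5000 Switch",
    64: "Brocade 5300 Switch",
    66: "Brocade 5100 Switch",
    71: "Brocade 300 Switch",
    109: "Brocade 6510 Switch",
}

def switch_model(switch_type_field: str) -> str:
    """Resolve e.g. '71.2' (type 71, revision 2) to a product name."""
    base = int(switch_type_field.split(".")[0])
    return SWITCH_TYPES.get(base, f"unknown switchType {base}")

print(switch_model("71.2"))  # Brocade 300 Switch
```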
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the switch that coordinates the fabric)
* Subordinate (every other switch)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: This action is disruptive!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape libraries are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today WWN zoning is practically the only method still used, because it is the most flexible and the most secure. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly being able to see a different device than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate a key on the switch===
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
</source>
e6817b71c1d1b349f32efb011bbad5cb6ca1e757
620
619
2014-11-26T12:20:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch model we are dealing with: in this case a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the switch that coordinates the fabric)
* Subordinate (every other switch)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: This action is disruptive!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape libraries are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today WWN zoning is practically the only method still used, because it is the most flexible and the most secure. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly being able to see a different device than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
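The defined and the currently active zoning configuration can be inspected on the switch. A sketch of the relevant commands; the zone name is made up for illustration:
<source lang=bash>
brocade:admin> cfgshow
brocade:admin> zoneshow "z_host1_stor1"
</source>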
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate a key on the switch===
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
</source>
1571eaa0bb2a4100daa7bf640936cd5a0cd606ad
621
620
2014-11-26T12:24:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch model we are dealing with: in this case a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the switch that coordinates the fabric)
* Subordinate (every other switch)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: This action is disruptive!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape libraries are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today WWN zoning is practically the only method still used, because it is the most flexible and the most secure. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly being able to see a different device than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate a key on the switch===
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
</source>
=Config backup=
Important: exchange the SSH keys first!
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
done
</source>
40817568ca0f201fe44283e13c490a6f72a642ac
622
621
2014-11-26T12:38:06Z
Lollypop
2
/* SSH mit public key */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
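The port list above can also be parsed automatically, for example to list all online F-Ports. A sketch, assuming the column layout shown in the output above:

```python
# Sample lines in the switchshow port-list format shown above.
SWITCHSHOW_OUTPUT = """\
 0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
 1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
 6 6 010600 id N8 No_Light FC
"""

def online_fports(text: str):
    """Return (port, wwn) pairs for every online F-Port line."""
    result = []
    for line in text.splitlines():
        fields = line.split()
        if "F-Port" in fields and "Online" in fields:
            port = int(fields[1])      # second column is the port number
            wwn = fields[-1]           # the attached device's WWN is the last column
            result.append((port, wwn))
    return result

for port, wwn in online_fports(SWITCHSHOW_OUTPUT):
    print(f"port {port}: {wwn}")
```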
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch model we are dealing with: in this case a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the switch that coordinates the fabric)
* Subordinate (every other switch)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: This action is disruptive!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape libraries are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today WWN zoning is practically the only method still used, because it is the most flexible and the most secure. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly being able to see a different device than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
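Creating and activating a WWN zone looks roughly like this; the zone name and member WWNs are made up for illustration, and note that cfgenable replaces the running configuration, so double-check before confirming:
<source lang=bash>
brocade:admin> zonecreate "z_host1_stor1", "21:00:00:24:ff:36:45:02; 50:0a:09:81:96:c8:3e:f8"
brocade:admin> cfgadd "Fabric1", "z_host1_stor1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
</source>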
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate a key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
</source>
=Config backup=
Important: exchange the SSH keys first!
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
done
</source>
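Since every run writes a new timestamped file, old copies accumulate on the backup host. A possible cleanup sketch; the 30-day retention and the directory path are assumptions:

```shell
#!/bin/bash
# Delete Brocade config backups older than RETENTION_DAYS from the backup directory.
BACKUPDIR="${HOME}/brocade_backup"
RETENTION_DAYS=30
mkdir -p "${BACKUPDIR}"
find "${BACKUPDIR}" -name '*_config_*.txt' -type f -mtime +"${RETENTION_DAYS}" -delete
```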
8092442d9358a4279c33cbf29e4794ee2ea39edb
623
622
2014-11-26T12:40:39Z
Lollypop
2
/* Backup der Config */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch model we are dealing with: in this case a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the switch that coordinates the fabric)
* Subordinate (every other switch)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: This action is disruptive!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape libraries are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow reveals which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Nowadays WWN zoning is used almost exclusively, because it is the most flexible and most secure approach. Cables can then simply be moved around within the [[#Fabric|Fabric]] without a device suddenly being able to see different devices than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
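As a sketch, a WWN zone could be created on the Brocade CLI like this. The zone name host_storage_zone is made up for this example; the WWNs are copied from the switchshow output above, and Fabric1 is the active configuration shown under [[#switchshow:zoning|switchshow:zoning]]:
<source lang=bash>
brocade:admin> zonecreate "host_storage_zone", "21:00:00:24:ff:36:45:02; 50:0a:09:81:96:c8:3e:f8"
brocade:admin> cfgadd "Fabric1", "host_storage_zone"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
</source>
cfgsave distributes the defined configuration to all switches in the [[#Fabric|Fabric]]; only cfgenable makes it the effective configuration.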
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH mit public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Key auf Switch generieren===
As user '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key vom Switch -> Host ~/.ssh/authorized_keys===
As user '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
</source>
=Backing up the config=
Important: exchange the SSH keys first!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys on the backup host.
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade switch.
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
done
</source>
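A backup produced this way can be played back with configdownload. This is only a sketch using the same hypothetical backup host and user as above and a made-up file name; a full restore requires the switch to be disabled first and is therefore disruptive:
<source lang=bash>
brocade1:admin> switchdisable
brocade1:admin> configdownload -all -p scp 10.0.0.42,bckpuser,brocade_backup/bsan01_config_20141128-120000.txt
brocade1:admin> switchenable
</source>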
be1598896c8798017c5a128a744cc41f40497fae
StorageTek SL150
0
190
624
2014-11-28T10:51:09Z
Lollypop
2
The page was newly created: „=StorageTek SL150 Modular Tapelibrary= ==General Knowledge== ===Default Password=== passw0rd ==General Documentation== * [https://support.oracle.com/handb…“
wikitext
text/x-wiki
=StorageTek SL150 Modular Tape Library=
==General Knowledge==
===Default Password===
passw0rd
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
ede011b586f2d7737b9c83d14527d90e5d4c6c0b
627
624
2014-12-01T07:40:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Backup]]
=StorageTek SL150 Modular Tape Library=
==General Knowledge==
===Default Password===
passw0rd
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
f8b304a62069a2e79c8a88e129383a0154f2ff61
ZFS Networker
0
158
625
595
2014-12-01T07:39:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Backup]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,logs,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
#cat >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | nawk '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(/usr/bin/nawk -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/nsr_backup
${ZFS_CMD} mount ${zfs_snapshot/@*/}/nsr_backup 2>/dev/null
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/nsr_backup)
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/nsr_backup | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME})
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if ( df -h ${zfs_snapshot/@*/}/nsr_backup )
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/nsr_backup
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/nsr_backup)_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/nsr_backup"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/nsr_backup
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
exit 1
}
cmd_option=$1
export cmd_option
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
SNAPSHOT_NAME="nsr"
ZFS_CMD="/usr/sbin/zfs"
ZLOGIN_CMD="/usr/bin/zlogin"
if [ "_${cmd_option}_" != "_init_" ]
then
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
pid=$(ptree $$ | nawk '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
pid=$(ptree $$ | nawk '/pstclntsave/{print $1}')
;;
esac
commandline="$(pargs -c ${pid} | nawk -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=/nsr/logs/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
mknod ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(/usr/cluster/bin/clrs show -t SUNW.LogicalHostname -p HostnameList | nawk -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(/usr/cluster/bin/scha_resource_get -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "Resource group of ${LH_RES} is ${RG}"
ZPOOLS=$(/usr/cluster/bin/clrs show -g ${RG} -p Zpools | nawk '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(/usr/cluster/bin/clrs show -p Start_command -g ${RG} | /usr/bin/nawk -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(nawk -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
else
LOGFILE=/nsr/logs/init.log
if [ $# -ne 2 ]
then
echo "Wrong count of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${LOGFILE} "ZPool to init : ${ZPOOL}"
fi
case ${cmd_option} in
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
for ZPOOL in ${ZPOOLS}
do
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
;;
*)
usage
;;
esac
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
6e788968bdee9f3cbe747a3151aa0b5c
</source>
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
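Whether the new resource actually came online can be checked with the usual cluster commands; sample-nsr-res is the resource name created above:
<source lang=bash>
# clrs status sample-nsr-res
# clrs show -v sample-nsr-res
</source>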
81f8a8193a97f5c103883f699bad7451bf3c6087
639
625
2014-12-10T16:27:08Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Backup]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,logs,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/bin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
DF_CMD="/usr/bin/df"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
exec >>${GLOBAL_LOGFILE} 2>&1
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "Resource group of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong count of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump_cluster <Ressource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
aedff1a8bfa8ee0a012cd7def115e626
</source>
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
3bc62762a04d8eece32469e04c1e0e6405dd4c44
640
639
2014-12-10T16:42:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Backup]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to setup a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used the bash as shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/bin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
DF_CMD="/usr/bin/df"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
exec >>${GLOBAL_LOGFILE} 2>&1
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/{SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/{SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get ressourceGroup name from ressource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "RessourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong count of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump_cluster <Ressource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
aedff1a8bfa8ee0a012cd7def115e626
</source>
!!!THIS CODE IS UNTESTED DO NOT USE THIS!!!
!!!THIS JUST AN EXAMPLE!!!
==Restore/Recover==
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
Look for a valid backup:
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
Restore ZFS configuration:
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Look into file /tmp/ZFS_Setup.sh which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register new resource type in cluster. One one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect to: sample-lh
1d77e41feeb2819244cf3937e740838ee97ae52f
641
640
2014-12-10T16:50:25Z
Lollypop
2
/* Restore/Recover */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
[[Kategorie:Backup]]
[[Kategorie:Solaris]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
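As a quick sanity check, the derived names expand like this (a minimal sketch, assuming NAME=sample as above):
<source lang=bash>
#!/bin/bash
# Derive all names from the base NAME, following the naming scheme above.
NAME=sample
RGname=${NAME}-rg                                 # resource group
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')   # Networker group, uppercased
ZPOOL=${NAME}_pool                                # zpool name
ZPOOL_BASEDIR=/local/${RGname}                    # mountpoint base

echo "${RGname} ${NetworkerGroup} ${ZPOOL} ${ZPOOL_BASEDIR}"
# -> sample-rg SAMPLE sample_pool /local/sample-rg
</source>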
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/log" # matches the /nsr/log directory created above
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/bin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
DF_CMD="/usr/bin/df"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
exec >>${GLOBAL_LOGFILE} 2>&1
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | ${AWK_CMD} '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
echo "Usage: $0 dump_cluster <Resource_Group> <Output-Dir>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d "${CONFIG_DIR}" ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
aedff1a8bfa8ee0a012cd7def115e626
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
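Two bash idioms the script above relies on are worth spelling out: the pattern substitution ${zfs_snapshot/@*/} strips the @snapname suffix to recover the filesystem name, and the "_${VAR}_" != "__" comparison tests whether a variable is empty. A minimal sketch:
<source lang=bash>
#!/bin/bash
# ${var/pattern/} replaces the first match of the glob pattern with nothing,
# so '@*' removes the snapshot suffix and leaves the filesystem name.
zfs_snapshot="sample_pool/data1@nsr"
echo "${zfs_snapshot/@*/}"        # -> sample_pool/data1

# Empty-variable test with '_' guards, as used for CONFIG_DIR in the script.
CONFIG_DIR=""
if [ "_${CONFIG_DIR}_" != "__" ]
then
    echo "CONFIG_DIR is set"
else
    echo "CONFIG_DIR is empty"    # -> this branch is taken
fi
</source>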
==Restore/Recover==
===Set some variables===
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
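The ${NSR_CLIENT%-cl} expansions above strip the shortest matching suffix (-cl) from the client name; a quick check:
<source lang=bash>
#!/bin/bash
# ${var%pattern} removes the shortest matching suffix from the value.
NSR_CLIENT="sample-cl"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
echo "${RG} ${ZONE}"              # -> sample-rg sample-zone
</source>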
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Look at the file /tmp/ZFS_Setup.sh, which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
Mount the needed ZFS filesystems.
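The mounting step can be sketched as a dry run, assuming the dataset names from the generated ZFS_Setup.sh shown above; it only prints the mount commands so they can be reviewed before piping them to sh:
<source lang=bash>
ZPOOL="sample_pool"
# Dataset list taken from the restored ZFS_Setup.sh; adjust to your layout.
for ds in app cluster_config data1 data2 home log usr_local zone; do
  echo "/usr/sbin/zfs mount ${ZPOOL}/${ds}"
done
</source>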
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</source>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</source>
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</source>
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
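`basename NAME SUFFIX` strips the suffix, so for a plain string it behaves like the `${NAME%-rg}` parameter expansion used elsewhere on this page; a quick check:
<source lang=bash>
RGname=sample-rg
echo "$(basename ${RGname} -rg)-nsr-res"   # sample-nsr-res
echo "${RGname%-rg}-nsr-res"               # sample-nsr-res (equivalent)
</source>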
e1dee8d8a27a53892fb660f4c01fe8e547e942e7
Category:Backup
14
191
626
2014-12-01T07:39:40Z
Lollypop
2
Created page with „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Blaberus giganteus
0
192
628
2014-12-01T07:45:45Z
Lollypop
2
Created page with „{{Systematik | DeName = Mittelamerikanische Riesenschabe | WissName = Blaberus giganteus | Autor = | Untergattung = | Ga…“
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor =
| Untergattung =
| Gattung = Blaberus
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Art = giganteus
| Verbreitung = Mittelamerika und nördliches Südamerika
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
98328ef7e8e7585a176399cceea6bb22d60eb0dd
Category:Blaberus
14
193
629
2014-12-02T12:13:51Z
Lollypop
2
Created page with „[[Kategorie:Schaben]]“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
b2ea0c52ba3cc8ec3158b4eaccc987028f8b404c
Filesysteme Tipps und Tricks
0
194
630
2014-12-03T14:18:26Z
Lollypop
2
Created page with „[[Kategorie:Linux]] [[Kategorie:ZFS]] ==Get the creation time... not the changetime== ===Creation time on zfs=== ====You need the Filesystem where the file re…“
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:ZFS]]
==Get the creation time... not the changetime==
===Creation time on zfs===
====You need the Filesystem where the file resides====
<source lang=bash>
# df -h /var/data/dumps/sackhalter_20140407.dump
Filesystem Size Used Avail Use% Mounted on
data/backup/dumps 24G 8.6G 16G 36% /var/data/dumps
</source>
====You need the i-node number of the file====
<source lang=bash>
# ls -i /var/data/dumps/sackhalter_20140407.dump
103 /var/data/dumps/sackhalter_20140407.dump
</source>
====Get the metadata of the file====
<source lang=bash>
# zdb -dddd data/backup/dumps 103 | grep crtime
crtime Tue Jul 29 13:00:18 2014
</source>
===Creation time on ext2/3/4===
====You need the Filesystem where the file resides====
<source lang=bash>
# df -h /usr/bin/passwd
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 8.4G 5.8G 60% /
</source>
====You need the i-node number of the file====
<source lang=bash>
# ls -i /usr/bin/passwd
130776 /usr/bin/passwd
</source>
====Get the metadata of the file====
<source lang=bash>
# debugfs -R 'stat <130776>' /dev/sda1 2>/dev/null | grep crtime
crtime: 0x5391870e:a6803fc8 -- Fri Jun 6 11:17:02 2014
</source>
====Nice oneliner====
<source lang=bash>
# file=/etc/passwd ; ls -1i ${file} | nawk -v dev=$(df --output=source ${file} | tail -n +2) 'BEGIN{debugfs="debugfs -R \"stat <INODE>\" /dev/sda1 2>/dev/null";}{file=$2;command=debugfs;gsub(/INODE/,$1,command); while (command | getline){if(/crtime/){print $0,file}}; close(command);}'
crtime: 0x54009e05:24f51228 -- Fri Aug 29 17:36:37 2014 /etc/passwd
</source>
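A more readable sketch of the one-liner, split into steps. The helper only builds the debugfs invocation; actually running it still requires root and the correct ext2/3/4 block device (the `/dev/sda1` below is just an example):
<source lang=bash>
# Build the debugfs command that prints a file's creation time.
crtime_cmd() {
  local file="$1" dev="$2" inode
  inode=$(ls -i "${file}" | awk '{print $1}')   # i-node number of the file
  echo "debugfs -R 'stat <${inode}>' ${dev}"
}
crtime_cmd /etc/passwd /dev/sda1
</source>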
6e48c58366bffc0eca5618ffc9d78d02051f5681
OwnCloud Config
0
195
631
2014-12-04T11:43:28Z
Lollypop
2
Created page with „[[Kategorie:OwnCloud]]“
wikitext
text/x-wiki
[[Kategorie:OwnCloud]]
8b43d611ab83247f286b10a82fdbe45a54783936
633
631
2014-12-04T11:44:03Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:OwnCloud Config]]
a49184491d91787e0c6bfeb62c5e54b53e497556
634
633
2014-12-04T11:44:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:OwnCloud|Config]]
1253eba658000e986328393428739919c198c8a4
635
634
2014-12-04T11:47:32Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:OwnCloud]]
==Separate self-installed apps from bundled apps==
In your config add:
<source lang=php>
'apps_paths' =>
array (
0 =>
array (
'path' => OC::$SERVERROOT.'/apps',
'url' => '/apps',
'writable' => false,
),
1 =>
array (
'path' => OC::$SERVERROOT.'/other_apps',
'url' => '/other_apps',
'writable' => true,
),
),
</source>
5baba26cec783e846a005fd3a9dd992bc4538bd6
636
635
2014-12-04T11:48:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:OwnCloud]]
==Separate self-installed apps from bundled apps==
In your config add:
<source lang=php>
'apps_paths' => array (
0 =>
array (
'path' => OC::$SERVERROOT.'/apps',
'url' => '/apps',
'writable' => false,
),
1 =>
array (
'path' => OC::$SERVERROOT.'/other_apps',
'url' => '/other_apps',
'writable' => true,
),
),
</source>
d1859ba223418c57b4663b639f1f1a067756cc1e
637
636
2014-12-04T11:48:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:OwnCloud]]
==Separate self-installed apps from bundled apps==
In your config add:
<source lang=php>
'apps_paths' => array (
0 => array (
'path' => OC::$SERVERROOT.'/apps',
'url' => '/apps',
'writable' => false,
),
1 => array (
'path' => OC::$SERVERROOT.'/other_apps',
'url' => '/other_apps',
'writable' => true,
),
),
</source>
d2df7bc79bdcd9773a1a2d8dc9a69faaabfc8f36
Category:OwnCloud
14
196
632
2014-12-04T11:43:47Z
Lollypop
2
Created page with „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Fibrechannel Analyse
0
139
638
607
2014-12-09T13:47:52Z
Lollypop
2
/* cfgadm -c unconfigure -o unusable_SCSI_LUN */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached with 2 storage systems to the switch with ID 1 and have a connection to a switch with ID 3 to which 2 further storage systems are attached.
* Node WWN
Here we see 4 disk devices with 2 entries each (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths via our single host port.
Hence the 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So a StorageTek 6580/6780.
* Revision: 0784
A rough firmware estimate (firmware version: 07.84.47.10).
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, and more:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
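The pipeline above can be decomposed into two helpers; a dry-run sketch that reads `cfgadm -alo show_SCSI_LUN` output on stdin and only echoes the cleanup commands (awk stands in for Solaris nawk here):
<source lang=bash>
unusable_aps() {
  # Keep lines whose last field is "unusable", strip the ",<LUN>" suffix,
  # and de-duplicate to one attachment point per target.
  awk '$NF=="unusable"{sub(/,[0-9]+$/,"",$1);print $1}' | sort -u
}
cfgadm_cleanup_cmds() {
  while read -r ap; do
    echo "cfgadm -c unconfigure -o unusable_SCSI_LUN ${ap}"
  done
}
# Example: cfgadm -alo show_SCSI_LUN | unusable_aps | cfgadm_cleanup_cmds
</source>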
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# 7 ports are fitted with SFPs but unused (0,4,11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
If you want to see which WWNs hide behind an NPIV port, portloginshow helps.
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
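To run this regularly, a crontab entry on the backup host could look like the following (schedule and log path are assumptions, not part of the original setup):
<source lang=bash>
# Hypothetical crontab entry: back up all switch configs nightly at 02:00.
0 2 * * * /opt/bin/backup_brocade_config >> /var/log/brocade_backup.log 2>&1
</source>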
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
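Hypothetical usage, assuming the script above is saved as `parse_brocade_config` and made executable. The sample input shows the minimal [Zoning] stanza shape the script expects (note that a blank line ends the stanza, matching the `/\[Zoning\]/,/^$/` range):
<source lang=bash>
# Create a tiny sample configupload file to try the parser on.
cat > /tmp/sample_config.txt <<'EOF'
[Zoning]
cfg.Fabric1:zone_host1
zone.zone_host1:alias_host1;alias_st1
alias.alias_host1:21:00:00:24:ff:05:74:e4
alias.alias_st1:50:0a:09:81:8d:32:5d:c4
enable:Fabric1

EOF
# ./parse_brocade_config /tmp/sample_config.txt
</source>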
=Commands: NetApp=
==fcp topology show : Where does my front-end SAN connect?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice bonus is the "host address", which shows that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : Alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
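A sketch of what the one-liner does, with the sanlun call stubbed out so the flow is visible (on Solaris, nawk and the real `/opt/NTAP/SANToolkit/bin/sanlun lun show -d` call replace the stub):
<source lang=bash>
map_zpool_luns() {
  awk '/c[0-9]t/{
         dev=$1; sub(/s[0-9]+$/,"",$1)
         # Real version runs sanlun lun show -d /dev/rdsk/<dev>s2 here
         # and reads the "filer:/vol/.../LUN" line from its output.
         print dev, "sanlun(" $1 "s2)"; next
       }{print}'
}
# Example: zpool status | map_zpool_luns
</source>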
ad39511568ab2b9fa791fbb0972b786e01cbc04f
ZFS cheatsheet
0
29
642
275
2014-12-23T09:11:55Z
Lollypop
2
/* Löschen nicht löschbarer Snapshots */
wikitext
text/x-wiki
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
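When several holds exist, the release commands can be generated from the `zfs holds` output; a dry-run sketch (the helper reads `zfs holds <snap>` output on stdin and only prints the commands, so they can be reviewed before execution):
<source lang=bash>
release_cmds() {
  local snap="$1"
  # Skip the header line; field 2 of `zfs holds` output is the hold tag.
  awk 'NR>1{print $2}' | while read -r tag; do
    echo "zfs release ${tag} ${snap}"
  done
}
# Example: zfs holds MYSQL-LOG@copy_20130403 | release_cmds MYSQL-LOG@copy_20130403
</source>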
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS usually comes from its very large appetite for cache memory, which can be limited.
First, check the current state:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
Or ZFS only:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Print all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
Man kann sich auch alle Parameter ausgeben lassen, die für ZFS gesetzt sind mit:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Setzen von Kernelparametern geht auch online mit:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
Das setzt den zfs_arc_max auf 4GB = 0x100000000
== Limitieren des ARC Cache ==
In der /etc/system einfach:
set zfs:zfs_arc_max = <Number of bytes>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== ZFS Platzverbrauch besser anzeigen ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
Wenn zfs list -o space als shortcut noch nicht zur Verfügung steht, geht meist:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
Erstmal den ZFS Rootpool anlegen:
# zpool create rpool /dev/dsk/<zfs-disk>
Wer Problemen aus dem Weg gehen möchte, läßt den Namen bei rpool.
Boot-Environment (BE) mit lucreate erstellen
# lucreate -c ufsBE -n zfsBE -p rpool
Hiermit werden die Files in die ZFS-Umgebung kopiert.
Prüfen, ob das BootFS richtig gesetzt wurde:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Auskommentieren von eventuell noch nachgebliebenen rootdev-Einträgen in der /etc/system
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Bootblock für ZFS auf die ZFS-Platte
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Aktivieren des neuen BEs
# luactivate zfsBE
[[Kategorie:ZFS]]
8061630fa56a16aadfce53a513d7314ca8e6fce5
643
642
2014-12-23T09:13:42Z
Lollypop
2
wikitext
text/x-wiki
== Links ==
* [[ZFS_Recovery|Reparieren von defktem ZFS]]
* Wichtige ZFS-Patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Löschen nicht löschbarer Snapshots ==
Hier nach einem abgebrochenen ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
== ZFS Tuning ==
Eine gefühlte Langsamkeit auf Systemen mit ZFS kommt vom sehr großen Cachehunger. Den kann man eingrenzen:
Erstmal schauen, was phase ist:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
Oder nur ZFS
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Ausgeben aller ARC-Parameter:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
Man kann sich auch alle Parameter ausgeben lassen, die für ZFS gesetzt sind mit:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Setzen von Kernelparametern geht auch online mit:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
Das setzt den zfs_arc_max auf 4GB = 0x100000000
== Limitieren des ARC Cache ==
In der /etc/system einfach:
set zfs:zfs_arc_max = <Number of bytes>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
== ZFS Platzverbrauch besser anzeigen ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
Wenn zfs list -o space als shortcut noch nicht zur Verfügung steht, geht meist:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
Erstmal den ZFS Rootpool anlegen:
# zpool create rpool /dev/dsk/<zfs-disk>
Wer Problemen aus dem Weg gehen möchte, läßt den Namen bei rpool.
Boot-Environment (BE) mit lucreate erstellen
# lucreate -c ufsBE -n zfsBE -p rpool
Hiermit werden die Files in die ZFS-Umgebung kopiert.
Prüfen, ob das BootFS richtig gesetzt wurde:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Auskommentieren von eventuell noch nachgebliebenen rootdev-Einträgen in der /etc/system
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Bootblock für ZFS auf die ZFS-Platte
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Aktivieren des neuen BEs
# luactivate zfsBE
[[Kategorie:ZFS]]
a542e599914f6134bdc9f24bebcda2f7cfaf0d21
647
643
2015-01-02T11:03:48Z
Lollypop
2
/* Limitieren des ARC Cache */
wikitext
text/x-wiki
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting snapshots that refuse to be deleted ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
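The tag column of zfs holds is exactly what zfs release expects. A small sketch (a dry run on canned output matching the session above; nothing is released or destroyed) that turns `zfs holds -H` lines into the matching release commands:

```shell
# Dry run: parse tab-separated `zfs holds -H` output (canned here)
# and print the matching `zfs release` commands instead of running them.
cmds=$(printf 'MYSQL-LOG@copy_20130403\t.send-22887-0\tWed Apr 3 09:03:32 2013\n' |
  awk -F'\t' '{ printf "zfs release %s %s\n", $2, $1 }')
printf '%s\n' "$cmds"
```

Piping live `zfs holds -H -r <snapshot>` output through the same awk and into sh would release every hold in one go.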
== ZFS Tuning ==
Systems with ZFS can feel sluggish because of its huge appetite for cache. You can rein that in.
First, have a look at the current state:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
Or just the ZFS lines:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Print all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
You can also print all parameters that are set for ZFS:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Kernel parameters can also be set online:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
This sets zfs_arc_max to 4 GB = 0x100000000.
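A quick shell sanity check that 0x100000000 is indeed 4 GiB:

```shell
# 0x100000000 bytes is exactly 4 GiB
bytes=$((0x100000000))
gib=$((bytes / 1024 / 1024 / 1024))
echo "$bytes bytes = $gib GiB"
```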
== Limiting the ARC cache ==
Simply add to /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
!!!! NEVER DO THIS !!!!
Never use ''mdb -kw'' to set these values!!!
On a '''test system''', however, you can find the positions in the kernel with
<source lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</source>
Calculate, for example, 8 GB:
<source lang=bash>
# printf "0x%x\n" $[ 8 * 1024 *1024 *1024 ]
0x200000000
</source>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<source lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</source>
== Displaying ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
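For reading the output: USED is the sum of the four USED* columns. Checking this against the rpool/swap row above (values in MB):

```shell
# rpool/swap row: USEDSNAP=0, USEDDS=111M, USEDREFRESERV=401M, USEDCHILD=0
usedsnap=0; usedds=111; usedrefreserv=401; usedchild=0
used=$((usedsnap + usedds + usedrefreserv + usedchild))
echo "${used}M"
```

The result matches the 512M shown in the USED column.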
== Migrating UFS root -> ZFS root via Live Upgrade ==
First, create the ZFS root pool:
# zpool create rpool /dev/dsk/<zfs-disk>
If you want to avoid trouble, keep the name rpool.
Create the boot environment (BE) with lucreate:
# lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.
Check that the bootfs property was set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries left over in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
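The perl one-liner comments a line out by prefixing it with `* ` (the /etc/system comment marker) and keeps a backup with the .orig suffix. The same substitution, demonstrated on a throwaway file instead of the real /etc/system:

```shell
# Try the substitution on a scratch copy instead of the real /etc/system
tmp=$(mktemp)
printf 'set noexec_user_stack=1\nrootdev:/pseudo/md@0:0,10,blk\n' > "$tmp"
perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' "$tmp"
out=$(cat "$tmp")
printf '%s\n' "$out"
rm -f "$tmp" "$tmp.orig"
```

Only the rootdev line gains the `* ` prefix; everything else is untouched.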
Install the ZFS boot block on the ZFS disk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Activate the new BE:
# luactivate zfsBE
[[Kategorie:ZFS]]
90180370c54bdd4be2018f6431bce20e614e9a99
MySQL Tipps und Tricks
0
197
644
2014-12-29T09:14:14Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MySQL]] ==Einzeiler== ===Alle Grants=== <source lang=bash> # mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from my…“
wikitext
text/x-wiki
646
644
2014-12-29T09:15:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
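Note that the one-liner passes user@host to SHOW GRANTS unquoted, which fails for accounts whose host contains characters such as %. A hedged sketch of a quoting helper (the function name is made up; the final mysql call is shown but not executed here):

```shell
# quote_account is a hypothetical helper: builds a 'user'@'host'
# literal that SHOW GRANTS accepts even for hosts like '%'
quote_account() { printf "'%s'@'%s'" "$1" "$2"; }
account=$(quote_account app '%')
echo "$account"
# usage (not run here):
#   mysql --execute "show grants for $account"
```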
f4c57ab6095a5dee631d710f26b20a4c67549041
Category:MySQL
14
198
645
2014-12-29T09:14:32Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Solaris grub
0
199
648
2015-01-07T12:19:39Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]] == SP-Konsole auf x86-Systemen == === Speed und Port im GRUB setzen === /rpool/boot/grub/menu.lst <source lang=bash> title Oracle Solari…“
wikitext
text/x-wiki
658
649
2015-01-08T13:31:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Grub]]
== SP console on x86 systems ==
=== Setting speed and port in GRUB ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Making the speed known to Solaris ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after a reboot.
=== Setting the console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
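For reference, a /etc/ttydefs entry has five colon-separated fields: label, initial flags, final flags, autobaud, and next label. Splitting the entry above:

```shell
# split the ttydefs line into its five fields
entry='console115200:115200 hupcl opost onlcr:115200::console'
oldIFS=$IFS; IFS=:
set -- $entry
IFS=$oldIFS
label=$1; initial=$2; final=$3; autobaud=$4; nextlabel=$5
echo "label=$label next=$nextlabel"
```

The empty fourth field means autobaud is disabled; the next label pointing back to console keeps the speed fixed.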
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
=== Setting the speed in the BIOS ===
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
=== Setting the speed in the SP ===
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
73fd8327419b1dc13b271062a060c4f98e958231
Solaris Einzeiler
0
200
650
2015-01-08T13:23:36Z
Lollypop
2
Die Seite wurde neu angelegt: „=== netstat -aun oder lsof -i -P -n unter Solaris 10 === <source lang=bash> #!/bin/bash pfiles /proc/* 2>/dev/null | nawk -v port= ' /^[0-9]/ { pid=$1; c…“
wikitext
text/x-wiki
652
651
2015-01-08T13:25:42Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Einzeiler]]
=== netstat -aun or lsof -i -P -n on Solaris 10 ===
<source lang=bash>
#!/bin/bash
pfiles /proc/* 2>/dev/null | nawk -v port=$1 '
/^[0-9]/ {
pid=$1; cmd=$2; type="unknown"; next;
}
$1 == "SOCK_STREAM" {
type="tcp"; next;
}
$1 == "SOCK_DGRAM" {
type="udp"; next;
}
$2 ~ /AF_INET?/ && ( port=="" || $5==port ) {
if($2 ~ /[0-9]$/ && type !~ /[0-9]$/) type=type""substr($2,8);
if(cmd!="") { printf("%d %s\n",pid,cmd); cmd="" }
printf(" %s:%s/%s\n",$3,$5,type);
}'
</source>
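To see what the script prints, you can feed the same awk program a canned pfiles-style excerpt (awk stands in for Solaris nawk here; the sample input is fabricated):

```shell
# same awk program as above, run on fabricated pfiles-like input
out=$(printf '123:\tsshd\n\tSOCK_STREAM\n\tsockname: AF_INET 0.0.0.0  port: 22\n' |
awk -v port= '
/^[0-9]/            { pid=$1; cmd=$2; type="unknown"; next }
$1 == "SOCK_STREAM" { type="tcp"; next }
$1 == "SOCK_DGRAM"  { type="udp"; next }
$2 ~ /AF_INET?/ && (port=="" || $5==port) {
  if ($2 ~ /[0-9]$/ && type !~ /[0-9]$/) type=type""substr($2,8)
  if (cmd != "")    { printf("%d %s\n", pid, cmd); cmd="" }
  printf(" %s:%s/%s\n", $3, $5, type)
}')
printf '%s\n' "$out"
```

One line per process, then one indented line per bound address:port with the socket type.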
4359d830871a48e5bd32577faf33492bbfca26dc
Solaris cluster clone
0
185
653
574
2015-01-08T13:26:35Z
Lollypop
2
wikitext
text/x-wiki
657
653
2015-01-08T13:30:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Cluster Clone]][[Kategorie:SunCluster|Clone]]
If you need to recreate a cluster node from a surviving node, follow these steps:
==Clone system disk==
For example via metattach to the metaroot.
==Edit the usual Solaris parameters==
/etc/nodename
/etc/hostname.*
Check: /etc/inet/hosts
If mirrored by SVM, do the following:
# Edit /etc/vfstab of the clone to normal Devices
# Edit /etc/system:
<source lang=bash>
* Begin MDD root info (do not edit)
** rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
</source>
Unmount the cloned disk
fsck the root slice of the cloned disk
==Edit Cluster parameter==
Get the right id from:
<source lang=bash>
# nawk '/cluster\.nodes\.[^.]*\.name/{split($1,field,"."); print field[3],$NF}' /etc/cluster/ccr/global/infrastructure
1 node-a
2 node-b
</source>
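Run on a canned two-line excerpt of the infrastructure file (awk standing in for nawk; the sample content is fabricated):

```shell
# extract node id and name pairs from fabricated infrastructure lines
out=$(printf 'cluster.nodes.1.name\tnode-a\ncluster.nodes.2.name\tnode-b\n' |
awk '/cluster\.nodes\.[^.]*\.name/ { split($1, f, "."); print f[3], $NF }')
printf '%s\n' "$out"
```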
Set the node ID on the clone:
echo <nodeid> > /etc/cluster/nodeid
for example, for node-b:
echo 2 > /etc/cluster/nodeid
9461c1bd2c50fdf1b09f34ccf569af3f364fd842
Sun Cluster - Repair Infrastructure
0
32
656
133
2015-01-08T13:28:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SunCluster|Repair Infrastructure]]
If the infrastructure file of a cluster node is damaged, or a quorum device that no longer exists has to be removed from the configuration, perform the following steps:
1. Bring the node into non-cluster mode
<pre>
# reboot -- -sx
</pre>
From the OBP on SPARC systems:
<pre>
ok> boot -sx
</pre>
Or on x86/Opteron:
<pre>
b -sx
</pre>
2. Edit the infrastructure file:
<pre>
# mount /var
# export TERM=vt100
# vi /etc/cluster/ccr/infrastructure
</pre>
All quorum-device entries must be removed here, and (with more than two nodes) the votes of the other nodes must be set to 0.
e.g.:
cluster.nodes.2.properties.quorum_vote 0
And enable install mode:
cluster.properties.installmode enabled
3. Regenerate the checksum in the file:
<pre>
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
</pre>
or, as of Solaris Cluster 3.2:
<pre>
#/usr/cluster/lib/sc/ccradm recover -o /etc/cluster/ccr/global/infrastructure
</pre>
4. Check whether everything is OK
<pre>
# /usr/cluster/lib/sc/chkinfr
</pre>
5. Reboot into cluster mode
<pre>
# reboot
</pre>
Alternative description by [http://www.edv-birk.de/ Lothar Birk]:
==Emergency: the cluster node cannot get cluster quorum at boot==
===Boot into 'non-cluster' mode===
boot -xs
===Editing the infrastructure file in the CCR===
<pre>
cd /etc/cluster/ccr
or
cd /etc/cluster/ccr/global
cp infrastructure 100610_infrastructure
vi infrastructure
- set the quorum vote of the other node to 0
...node.X...quorum_vote 0
- delete all lines at the end of the file containing:
...quorum_devices...
/usr/cluster/lib/sc/ccradm -i infrastructure -o
or
/usr/cluster/lib/sc/ccradm recover -o infrastructure
</pre>
===Boot back into cluster mode and create a quorum device===
<pre>
init 6
clq add d1
</pre>
629fce3161247f3ebe3c08835542eebeeb8ba1cf
Solaris IPMP
0
73
659
231
2015-01-08T13:31:40Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|IPMP]]
==Setting the configuration to manual==
<pre>
netadm enable -p ncp defaultfixed
</pre>
==Simple IPMP with a standby interface==
<pre>
ipadm create-ip net0
ipadm create-ip net1
ipadm set-ifprop -p standby=on -m ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a local=1.2.3.4/24 ipmp0/v4
</pre>
==Link-based IPMP in a VLAN (here VLAN 2)==
<pre>
# Configure the VLAN interfaces
dladm create-vlan -l net1 -v 2 net1_vlan2
dladm create-vlan -l net2 -v 2 net2_vlan2
# Plumb the VLAN interfaces for IP
ipadm create-ip net1_vlan2
ipadm create-ip net2_vlan2
# Configure the IPMP interface
ipadm create-ipmp -i net1_vlan2,net2_vlan2 ipmp0
# And configure an IP address on the IPMP interface as usual
ipadm create-addr -T static -a local=10.1.2.106/24 ipmp0
# And set the default route persistently
route -p add default 10.1.2.254
</pre>
6958567e1be47bac17d08863fc84c536f54597fc
Solaris kernel debugging
0
24
660
41
2015-01-08T13:32:27Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Kernel Debugging]]
* Boot directly into the debugger
<pre>
ok> boot -kd
...
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
[0]>
</pre>
or, on x86, select the GRUB entry and add -kd to the "kernel" line...
* Enable moddebug
<pre>
[0]> moddebug/W 0x80000000
moddebug: 0 = 0x80000000
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Enable kmem debugging flags
<pre>
[0]> kmem_flags/W 0x0000000f
kmem_flags: 0 = 0xf
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Enable snooping
<pre>
[0]> snooping/W 0x1
snooping: 0 = 0x1
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Print the stack
<pre>
[0]> $c
</pre>
* Show the last messages
<pre>
[0]> ::msgbuf
</pre>
* Write a crash dump on x86 systems
<pre>
panic...
[0]> $<systemdump
</pre>
* Links
* [http://developers.sun.com/solaris/articles/manage_core_dump.html Core Dump Management on the Solaris OS]
* [http://www.c0t0d0s0.org/presentations/hhosug/hhosug2.pdf PDF des zweiten HHOSUG Meetings]
e4f9ea090a7005130a129cf398eaa1684ebe89c5
Solaris mdb magic
0
23
661
42
2015-01-08T13:33:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Modular Debugger]]
=Assorted small mdb tricks=
==Memory usage==
<pre>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</pre>
==Querying kernel parameters==
Syntax: echo '<Parameter>/D' | mdb -k
<pre>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</pre>
==Setting kernel parameters==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<pre>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</pre>
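The value written with /W is 32-bit; for 64-bit parameters such as zfs_arc_max, mdb's /Z format writes 8 bytes. A small helper sketch that only builds the command line (it deliberately does not touch the kernel):

```shell
# build (but do not run) an mdb write command for a 64-bit parameter
param=zfs_arc_max
hexval=$(printf '%x' $((4 * 1024 * 1024 * 1024)))
cmd="echo '${param}/Z${hexval}' | mdb -kw"
echo "$cmd"
```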
c639ac6b4c69354e398fc95b75dbd1bc0b4c652d
Solaris OracleDB zone
0
188
662
611
2015-01-08T13:34:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Oracle Zone]]
=Setting up an Oracle database in a Solaris zone with a CPU limit=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<source lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</source>
To calculate your own value:
<source lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</source>
==Create Zone==
Set values:
<source lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</source>
Create zone with
<source lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</source>
Enable dynamic pool service to add support for dedicated-cpus:
<source lang=bash>
svcadm enable svc:/system/pools/dynamic
</source>
Install and boot:
<source lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</source>
CPU-check:
<source lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</source>
==Create ZPools==
I used this paper: [http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf]
Values are for Solaris 10.
<source lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATA_VDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZIL_VDEV="mirror c1t3d0 c1t4d0"
REDOPOOL=redopool
REDOPOOL_DATA_VDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZIL_VDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATA_VDEV="mirror c1t9d0 c1t10d0"
DB_BASEPATH=/database
DB_BLOCK_SIZE=8192
</source>
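ZFS only accepts recordsize values that are powers of two (512 bytes up to 128 KB on Solaris 10), so a quick check before creating the filesystems does not hurt:

```shell
#!/bin/bash
# Verify DB_BLOCK_SIZE is a power of two before using it as recordsize.
# A power of two has no bits in common with its predecessor.
DB_BLOCK_SIZE=8192

if (( DB_BLOCK_SIZE & (DB_BLOCK_SIZE - 1) )); then
  echo "error: ${DB_BLOCK_SIZE} is not a power of two"
else
  echo "ok: recordsize=${DB_BLOCK_SIZE}"
fi
```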
<source lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATA_VDEV} log ${DATABASEPOOL_ZIL_VDEV}
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/data ${DATABASEPOOL}/data
zfs set logbias=throughput ${DATABASEPOOL}/data
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/index ${DATABASEPOOL}/index
zfs set logbias=throughput ${DATABASEPOOL}/index
zfs create -o mountpoint=${DB_BASEPATH}/temp ${DATABASEPOOL}/temp
zfs set logbias=throughput ${DATABASEPOOL}/temp
zfs create -o mountpoint=${DB_BASEPATH}/undo ${DATABASEPOOL}/undo
zfs set logbias=throughput ${DATABASEPOOL}/undo
</source>
<source lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATA_VDEV} log ${REDOPOOL_ZIL_VDEV}
zfs create -o mountpoint=${DB_BASEPATH}/redo ${REDOPOOL}/redo
zfs set logbias=latency ${REDOPOOL}/redo
</source>
<source lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATA_VDEV}
zfs create -o compression=on -o mountpoint=${DB_BASEPATH}/archive ${ARCHIVEPOOL}/archive
zfs set primarycache=metadata ${ARCHIVEPOOL}/archive
</source>
dff3cffaeb71e6a47e36b4821b25b711fa2dae28
Solaris perl
0
93
663
235
2015-01-08T13:34:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Perl]]
==Module::Build / Build.PL==
If you see error messages like
<pre>
gcc: unrecognized option '-KPIC'
gcc: language O4 not recognized
</pre>
while building Perl modules on Solaris, you can try overriding the default variables in Module::Build:
<pre>
# /usr/perl5/bin/perlgcc Build.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
# make
</pre>
The same applies to Makefile.PL:
<pre>
/usr/perl5/bin/perlgcc Makefile.PL cc=gcc ld=gcc optimize='-O2' cccdlflags='-DPIC'
</pre>
==Environment variables for programs that use MakeMaker==
On Solaris you often run into problems when only GCC is installed. Calling /usr/perl5/bin/perlgcc helps in most cases.
For SpamAssassin's sa-compile, however, it does not help. Instead, set the necessary parameters via PERL_MM_OPT:
<pre>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile -D
</pre>
The parameter names can be found with <i>perl -V</i>.
More on this topic is available [http://search.cpan.org/~mschwern/ExtUtils-MakeMaker/lib/ExtUtils/MakeMaker.pm here].
07bb8fd991c1c68c77d983b8b579587174e1c605
Solaris SMF
0
100
664
537
2015-01-08T13:34:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<pre>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</pre>
d7604dbaba5a8597c2eaf9fd5d6144b6e5cb92d7
673
664
2015-02-13T14:19:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<pre>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</pre>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
721a8bbde8dd11f1a5f6a09de57ac0debd97c7e8
674
673
2015-02-13T14:21:24Z
Lollypop
2
/* Running foreground processes */
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
cb5cf9d1782165e5bebb9a4fdeb99981344ca763
Solaris ssh from DVD
0
111
665
312
2015-01-08T13:35:22Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SSH]]
=Get SSH on a system booted from DVD=
==Mount DVD==
<source lang=bash>
# iostat -En
c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: AMI Product: Virtual CDROM Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 732 Predictive Failure Analysis: 0
...
# mkdir /tmp/dvd
# mount -F hsfs -oro /dev/dsk/c0t0d0s0 /tmp/dvd
</source>
==Unpacking software==
<source lang=bash>
# mkdir /tmp/pkg
# pkgtrans /tmp/dvd/Solaris_10/Product /tmp/pkg SUNWsshu SUNWcry SUNWopenssl-libraries
# mkdir /tmp/ssh
# cd /tmp/ssh
# 7z x -so /tmp/pkg/SUNWsshu/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWcry/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWopenssl-libraries/archive/none.7z | cpio -idv
</source>
==Use unpacked libraries==
<source lang=bash>
# crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
# crle
Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
Command line:
crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
</source>
==Check it==
<source lang=bash>
# ldd /tmp/ssh/usr/bin/ssh
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libz.so.1 => /usr/lib/libz.so.1
libcrypto.so.0.9.7 => /usr/sfw/lib/libcrypto.so.0.9.7
libgss.so.1 => /usr/lib/libgss.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libcmd.so.1 => /lib/libcmd.so.1
libdoor.so.1 => /lib/libdoor.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
libm.so.2 => /lib/libm.so.2
</source>
Looks good:
* libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
==Use ssh from /tmp/ssh==
<source lang=bash>
# /tmp/ssh/usr/bin/ssh <user>@<ip>
</source>
39ba5477b6509fae6ca2ef5c3adbd5358d7e16c8
Solaris zone memory on the fly
0
118
666
326
2015-01-08T13:35:46Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Zone Memory]]
= Setting memory parameters for running zones =
You can change memory parameters of a running zone, but remember to make the change persistent by updating the zone configuration, too.
That is why I always do the configuration change first.
== Change setting in the config file ==
<source lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</source>
== Change settings for the running zone ==
First take a look:
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 65536 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Set the new values:
<source lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
</source>
Verify the values:
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Done.
427b12577499326fd772aa5a20fba1045466d7ec
ZFS Networker
0
158
667
641
2015-01-08T13:36:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|Backup]]
[[Kategorie:Backup|Networker]]
[[Kategorie:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/log"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/bin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
DF_CMD="/usr/bin/df"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
exec >>${GLOBAL_LOGFILE} 2>&1
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pre ${DB} ${DBUSER} ${ZONE}
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# snapshot_pst ${DB} ${DBUSER} ${ZONE}
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
#for ZPOOL in ${ZPOOLS}
#do
# snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
#done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
aedff1a8bfa8ee0a012cd7def115e626
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Look into the file /tmp/ZFS_Setup.sh, which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
Mount the needed ZFS filesystems.
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</source>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</source>
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</source>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
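The name derivation relies on basename's suffix stripping: if the second argument matches the end of the name, it is removed. A quick sanity check with the names from above:

<source lang=bash>
# basename removes a matching trailing suffix, so "-rg" is stripped:
RGname=sample-rg
basename ${RGname} -rg                      # prints: sample
echo "$(basename ${RGname} -rg)-nsr-res"    # prints: sample-nsr-res
</source>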
Now we have a client name we can connect to: sample-lh
5fafdc8330c4d47b2fcb6ad2422f78c9c78262cb
NetApp Commands
0
201
668
2015-01-27T09:01:51Z
Lollypop
2
Page created: „[[Kategorie: NetApp]] ==Performance == filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set filer> priv set -q diag ; stats…“
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency
</source>
===Flashpool===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr
</source>
7867ce9dc23b1fb8f841e9e493640fb07217fcb4
669
668
2015-01-27T09:02:39Z
Lollypop
2
/* Performance */
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
===Flashpool===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
3476cdfefe1d6b35cfab1cdc2504cd78bb2199b4
Category:LDOM
14
202
670
2015-02-12T12:22:19Z
Lollypop
2
Page created: „[[Kategorie:Solaris]]“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
45811ac9bef9ab2254080294d01e6f892f5d9499
Solaris LDOM
0
203
671
2015-02-12T12:29:27Z
Lollypop
2
Page created: „[[Kategorie:LDOM]] [[Kategorie:Solaris]] <source lang=bash> #!/bin/bash link=$1 dev=$(dladm show-phys -L ${link} | \ nawk ' NR==2{ dev=$2; gsub(/[0-9]+$/,…“
wikitext
text/x-wiki
[[Kategorie:LDOM]]
[[Kategorie:Solaris]]
<source lang=bash>
#!/bin/bash
link=$1
dev=$(dladm show-phys -L ${link} | \
nawk '
NR==2{
dev=$2; gsub(/[0-9]+$/,"",dev);
instance=$2; gsub(/^[^0-9]*/,"",instance);
while(getline < "/etc/path_to_inst"){
gsub(/"/,"",$NF);
if($NF == dev && $(NF-1) == instance){
gsub(/"/,"",$1);
gsub(/^\//,"",$1);
print $1;
}
}
}
')
ldm ls-io -l ${dev}
</source>
cb7000638a7facda1e45d8dc9d4161998eab7a09
672
671
2015-02-12T12:37:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:LDOM]]
[[Kategorie:Solaris]]
==Useful scripts==
===get_pf_from_link_name.sh===
<source lang=bash>
#!/bin/bash
link=$1
dev=$(dladm show-phys -L ${link} | \
nawk '
NR==2{
dev=$2; gsub(/[0-9]+$/,"",dev);
instance=$2; gsub(/^[^0-9]*/,"",instance);
while(getline < "/etc/path_to_inst"){
gsub(/"/,"",$NF);
if($NF == dev && $(NF-1) == instance){
gsub(/"/,"",$1);
gsub(/^\//,"",$1);
print $1;
}
}
}
')
ldm ls-io -l ${dev}
</source>
66d365f4fcbe428aa415fb4bceb99609792a7d54
Hauptseite
0
1
675
453
2015-02-16T14:52:27Z
Lollypop
2
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
Bitte zuerst meinen [[Project:General_disclaimer|Haftungsausschluss]] lesen!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
=[[:Kategorie:Projekte|Meine Projekte]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
With its ruling of 12 May 1998, the Landgericht Hamburg decided that by placing a link you may share responsibility for the content of the linked pages. This can only be prevented by explicitly distancing yourself from that content. For all links on this homepage: I hereby explicitly distance myself from the content of all pages linked from my homepage and do not adopt that content as my own.
cc3c548b38ae89d92b4245b92987e97972594833
Fibrechannel Analyse
0
139
676
638
2015-02-19T16:26:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl, namely with ID 1 and with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So, together with two storage systems, we are attached to the switch with ID 1 and have a link to a switch with ID 3 to which two further storage systems are attached.
* Node WWN
Here we see four disk devices, each with two entries (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths through our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage
Host Bus Adapter: FC card
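The Port_ID is in fact the 24-bit Fibre Channel address, three bytes <domain><area><AL_PA>; the last byte (left as <??> above) is the AL_PA, which is usually 00 for fabric-attached ports. A small plain-bash sketch (a hypothetical helper, not from the original page) to split an ID as printed by luxadm:

<source lang=bash>
#!/bin/bash
# Split a (possibly short) hex port ID like "30200" into its three bytes.
portid=$(printf '%06x' $((16#30200)))   # zero-pad to 6 hex digits -> 030200
domain=$((16#${portid:0:2}))            # switch domain ID
area=$((16#${portid:2:2}))              # switch port (area)
alpa=$((16#${portid:4:2}))              # AL_PA, usually 0 on fabric ports
echo "domain=${domain} port=${area} alpa=${alpa}"   # -> domain=3 port=2 alpa=0
</source>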
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So a StorageTek 6580/6780.
* Revision: 0784
A rough hint at the firmware (firmware version: 07.84.47.10).
See [[#lsscs list array <array_name>]].
* Serial Num: SP01068442
Handy for matching LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After that, one block per path to this device follows, consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access), the device tells the host which paths the host should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this issues a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# Six ports are fitted with SFPs but unused (0, 4, 11-15)
# Eight ports have no license and no SFP (No_Module)
# Nine ports are in use
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
If you want to see which WWNs are hiding behind an NPIV port, portloginshow helps.
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show: where does my front-end SAN attach?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port>: which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show): alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZFS pools)==
If you want to know which NetApp LUNs belong to a ZFS pool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs, including several per line if a line contains more than one.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
debb20e1819e431837d8751d5d393a735b1d3f38
SSH Tipps und Tricks
0
75
677
278
2015-03-17T09:33:48Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, the way to the target=
==SSH across one or more hops==
To open an SSH connection from Host_A to Host_B, you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to drag port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the whole way from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
But we can only reach GW_2 via GW_1, so we need an entry for that hop as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunnelled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now simply done like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are set up in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
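With newer OpenSSH (7.3 and later), the same multi-hop setup can be written without the /dev/tcp trick, using ProxyJump; a minimal sketch with the host names from above:

<pre>
Host Host_B
    ProxyJump GW_1,GW_2
</pre>

On the command line, <i>ssh -J GW_1,GW_2 Host_B</i> does the same.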
==Breaking out of paradise==
Problem: the environment you are stuck in is so thoroughly walled off with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look at or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then you add to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And presto, <i>ssh ssh-via-proxy</i> puts you on the SSH target you want to reach. Of course you can use that host as a ProxyCommand hop again, and so on.
==Oh, right... the internal wiki...==
No problem either if it is only reachable from the internal network; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- das ist optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or, again, via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config-file equivalents for -N and -f).
==The fingerprint==
For verification it is often easier to work with shorter strings. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
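Note that OpenSSH 6.8 and later print SHA256 fingerprints by default; to get the old MD5 colon-hex form shown above on a current system, select the hash explicitly:

<pre>
$ ssh-keygen -E md5 -lf ~/.ssh/id_dsa.pub
</pre>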
=PuTTY Portable=
==Starting pageant together with putty==
The following must go into the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mcs.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
493f21310c53645c6589b5f5ee6652ef63fe0abc
679
678
2015-03-17T09:34:51Z
Lollypop
2
/* pageant zusammen mit putty starten */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, der Weg zum Ziel=
==SSH über ein oder mehrere Hops==
Um die SSH-Verbindung von Host_A zu Host_B machen zu können muß man sich über zwei Rechner dorthin tunneln (GW_1 und GW_2). Wenn man sich immer erst einloggt und dann weiter einloggt ist es manchmal sehr schwierig die Portforwardings mitzuschleifen, oder den Socks5-Proxy. Einfacher ist es, wenn man sich ProxyCommands für den Weg von Host_a zu Host_b definiert.
Wir kommen also nur von GW_2 zu Host_b, also machen wir uns hierfür einen Eintrag in der ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Zu GW_2 kommen wir aber nur über GW_1, also brauchen wir hierfür auch einen Eintrag:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Jetzt gibt man auf Host_A einfach <i>ssh Host_B</i> ein und wird über die beiden Gateways GW_1 und GW_2 getunnelt.
Portforwardings für z.B. NFS macht man jetzt einfach so:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
Im Hintergrund werden dann die TunnelVerbindungen aufgebaut und man macht das PortForwarding direkt von Host_A nach Host_B. Sehr schlank und elegant.
PS: Das /dev/tcp/%h/%p ist ein BASH-Builtin wobei %h und %p von der SSH durch Host (%h) und Port (%p) ausgefüllt werden
==Ausbruch aus dem Paradies==
Problem: Die Umgebung in der man sich aufhält ist leider so unglücklich mit Firewalls verbaut, daß man nicht arbeiten kann. Man muß aber per SSH raus, um woanders kurz etwas zu schauen, oder zu holen. Nunja, es gibt immer einen Weg...
Vorraussetzung ist ein lokal installiertes [http://www.meadowy.org/~gotoh/projects/connect connect], z.B. unter Ubuntu: apt-get install connect-proxy.
Weiterhin braucht man einen SSH-Server, wo ein sshd auf dem Port 443 lauscht, denn die meisten Proxies wollen einen nur auf known Ports durchlassen.
Dann trägt man in der ~/.ssh/config ein:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Schwuppdiwupp ist man mit <i>ssh ssh-server</i> auf dem SSH-Ziel, wo man hinmöchte. Natürlich kann man auf den ssh-server wieder als ProxyCommand eintragen usw. usw.
==Achja... das interne Wiki...==
Auch nicht schlimm, wenn das nur vom internen Netz erreichbar ist, dann fragen wir einfach via Socks-Proxy an:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
Die Optionen sind:
<pre>
-C Requests compression <- das ist optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Oder wieder via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
Und dann <i>ssh -N -f wiki</i> (Entsprechungen für -N und -f habe ich noch nicht gefunden).
==Der Fingerabdruck==
Für die Verifikation ist es oft leichter mit kürzeren Zahlenketten. Daher ist der Fingerabdruck praktisch, um Keys einfacher zu vergleichen:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=PuTTY Portable=
==pageant zusammen mit putty starten==
In die Datei ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini muß folgendes unter [Launch] stehen:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
d7798905944f8088f85e7c1c6dd89f88ebf21dc9
680
679
2015-03-17T09:37:07Z
Lollypop
2
/* pageant zusammen mit putty starten */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
=SSH, der Weg zum Ziel=
==SSH über ein oder mehrere Hops==
Um die SSH-Verbindung von Host_A zu Host_B machen zu können muß man sich über zwei Rechner dorthin tunneln (GW_1 und GW_2). Wenn man sich immer erst einloggt und dann weiter einloggt ist es manchmal sehr schwierig die Portforwardings mitzuschleifen, oder den Socks5-Proxy. Einfacher ist es, wenn man sich ProxyCommands für den Weg von Host_a zu Host_b definiert.
Wir kommen also nur von GW_2 zu Host_b, also machen wir uns hierfür einen Eintrag in der ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Zu GW_2 kommen wir aber nur über GW_1, also brauchen wir hierfür auch einen Eintrag:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Jetzt gibt man auf Host_A einfach <i>ssh Host_B</i> ein und wird über die beiden Gateways GW_1 und GW_2 getunnelt.
Portforwardings für z.B. NFS macht man jetzt einfach so:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
Im Hintergrund werden dann die TunnelVerbindungen aufgebaut und man macht das PortForwarding direkt von Host_A nach Host_B. Sehr schlank und elegant.
PS: Das /dev/tcp/%h/%p ist ein BASH-Builtin wobei %h und %p von der SSH durch Host (%h) und Port (%p) ausgefüllt werden
==Ausbruch aus dem Paradies==
Problem: Die Umgebung in der man sich aufhält ist leider so unglücklich mit Firewalls verbaut, daß man nicht arbeiten kann. Man muß aber per SSH raus, um woanders kurz etwas zu schauen, oder zu holen. Nunja, es gibt immer einen Weg...
Vorraussetzung ist ein lokal installiertes [http://www.meadowy.org/~gotoh/projects/connect connect], z.B. unter Ubuntu: apt-get install connect-proxy.
Weiterhin braucht man einen SSH-Server, wo ein sshd auf dem Port 443 lauscht, denn die meisten Proxies wollen einen nur auf known Ports durchlassen.
Dann trägt man in der ~/.ssh/config ein:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Schwuppdiwupp ist man mit <i>ssh ssh-server</i> auf dem SSH-Ziel, wo man hinmöchte. Natürlich kann man auf den ssh-server wieder als ProxyCommand eintragen usw. usw.
==Achja... das interne Wiki...==
Auch nicht schlimm, wenn das nur vom internen Netz erreichbar ist, dann fragen wir einfach via Socks-Proxy an:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
Die Optionen sind:
<pre>
-C Requests compression <- das ist optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Oder wieder via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
Und dann <i>ssh -N -f wiki</i> (Entsprechungen für -N und -f habe ich noch nicht gefunden).
==Der Fingerabdruck==
Für die Verifikation ist es oft leichter mit kürzeren Zahlenketten. Daher ist der Fingerabdruck praktisch, um Keys einfacher zu vergleichen:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=PuTTY Portable=
==pageant zusammen mit putty starten==
In die Datei ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini muß folgendes unter [Launch] stehen:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
Zu PortableApps siehe auch:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
c27bebc99080fbd79524ca7c5d7b9d755e99f624
681
680
2015-03-17T09:38:19Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
[[Kategorie:Putty]]
=SSH, the Way to the Destination=
==SSH over one or more hops==
To open the SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the path from Host_A to Host_B.
So Host_B is only reachable from GW_2; let's create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
But GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now, on Host_A, you simply type <i>ssh Host_B</i> and are tunnelled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p uses Bash's built-in /dev/tcp redirection; %h and %p are filled in by ssh with the host (%h) and port (%p).
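The /dev/tcp trick needs nothing but Bash on the gateways. If the gateways run a newer OpenSSH (5.4 or later for -W, 7.3 or later for ProxyJump), the same chain can be written more directly; a sketch under that assumption, with the same host names as above:

```
Host GW_2
    ProxyCommand ssh -W %h:%p GW_1

# or, with OpenSSH 7.3 or newer, a single entry for the whole chain:
Host Host_B
    ProxyJump GW_1,GW_2
```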
==Breaking out of paradise==
Problem: the environment you are in is so unfortunately boxed in with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look at or fetch something elsewhere. Well, there is always a way...
A prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then put this into ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, <i>ssh ssh-via-proxy</i> puts you on the SSH destination you want to reach. Of course you can chain further ProxyCommands from there, and so on and so on.
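If connect is not installed, an nc build with proxy support (e.g. OpenBSD netcat) can often serve as the ProxyCommand instead; a sketch under that assumption (flag names differ between netcat variants):

```
Host ssh-via-proxy
    Hostname ssh-server
    Port 443
    ProxyCommand nc -X connect -x proxy-server:3128 %h %p
```

Here ssh substitutes %h and %p with the Hostname and Port values of the entry.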
==Oh yes... the internal wiki...==
Not a problem either if it is only reachable from the internal network; then we simply ask via a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic (SOCKS5) proxy forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not found config equivalents for -N and -f yet; newer OpenSSH releases, 8.7 and later, added SessionType and ForkAfterAuthentication for this).
==The fingerprint==
For verification it is often easier to work with shorter strings. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
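Newer OpenSSH versions (6.8 and later) print SHA256 fingerprints by default and take -E to select the hash. A small sketch, assuming the OpenSSH client tools are installed; the key path and comment are just examples:

```shell
# Generate a throw-away key pair without a passphrase, then show both
# the default SHA256 fingerprint and the old MD5 colon-hex form.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C demo@example -f "$tmp/id_demo" >/dev/null
fp_sha=$(ssh-keygen -lf "$tmp/id_demo.pub")
fp_md5=$(ssh-keygen -E md5 -lf "$tmp/id_demo.pub")
echo "$fp_sha"
echo "$fp_md5"
rm -rf "$tmp"
```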
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
9db4eaf10bd938e26739397aa3c1338384f8d3ed
Networker Tipps und Tricks
0
204
685
684
2015-04-14T15:01:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Backup]]
==Checking the backup status==
Show whether a backup ran for the client <networker-client> within the last 24 hours:
<source lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,savetime(17),name,sumsize" -t "1 day ago" -q client=<networker-client>
</source>
==Recover/Restore==
Find the SSID of the saveset:
<source lang=bash>
# mminfo -s <networker-server> -q "client=<networker-client>,name=<directory>" -r "ssid,name,savetime(17)"
2752466240 <directory> 03/23/15 00:16:16
...
387566382 <directory> 03/31/15 00:16:14
</source>
OK, we want the backup from 2015-03-31 00:16:14, i.e. SSID 387566382.
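Picking the newest SSID by hand gets tedious; since the listing above prints one saveset per line, oldest first, the first column of the last line is the newest SSID. A sketch on hypothetical captured mminfo output (in practice you would pipe the real mminfo command into awk):

```shell
# Hypothetical mminfo output, oldest first as in the listing above.
mminfo_out='2752466240 <directory> 03/23/15 00:16:16
387566382 <directory> 03/31/15 00:16:14'
# First column of the last line = SSID of the newest saveset.
ssid=$(printf '%s\n' "$mminfo_out" | awk 'END{print $1}')
echo "$ssid"
```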
Restore into a target directory:
<source lang=bash>
# recover -s <networker-server> -S 387566382 -d <destination-directory>
</source>
Careful: this restores ONLY the files that were backed up on that day!
If you want to restore everything as it was at a specific point in time, it works like this:
<source lang=bash>
# recover -s <networker-server> -c <networker-client> -t '03/31/15 00:16:14' -d <destination-directory> -a <directory>
</source>
7de51c118b503cb88e86260dba6531069a6746c8
Apache
0
205
691
690
2015-04-16T10:28:15Z
Lollypop
2
/* Issue the certificate */
wikitext
text/x-wiki
== Generating a certificate ==
===Adjust the defaults sensibly===
Adjust Country & co. to values that fit your setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name prime256v1 | openssl ec -aes256 -out server.de.ec-key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
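The DN prompts can also be answered non-interactively with -subj, which is handy in scripts. A sketch using throw-away paths; server.de and the DN values are just the example names from above, and the key is written unencrypted here (no -aes256):

```shell
tmp=$(mktemp -d)
# Generate the EC key; prime256v1 is OpenSSL's name for secp256r1 / NIST P-256.
openssl ecparam -genkey -name prime256v1 | openssl ec -out "$tmp/server.de.ec-key" 2>/dev/null
# Issue a self-signed wildcard certificate, answering the DN prompts via -subj.
openssl req -new -x509 -sha256 -key "$tmp/server.de.ec-key" \
    -out "$tmp/server.de-wildcard.pem" -days 1825 \
    -subj "/C=DE/ST=Hamburg/L=Hamburg/O=My Site/OU=Sub/CN=*.server.de"
# Check the subject that ended up in the certificate.
subject=$(openssl x509 -noout -subject -in "$tmp/server.de-wildcard.pem")
echo "$subject"
rm -rf "$tmp"
```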
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
</VirtualHost>
</source>
f57f4f371ea8eedb9047c61bfaf6e6c69c5fcc1a
SunCluster Delete Ressource Group
0
206
692
2015-04-21T15:51:10Z
Lollypop
2
The page was newly created: „=Preparations= Derivation of the data that will be used in the one-liners later. Don't do this! Once again, I accept no responsibility! Al…“
wikitext
text/x-wiki
=Preparations=
Derivation of the data that will be used in the one-liners afterwards.
Don't actually do this! Once again, I accept no responsibility! All wrong! Don't do it!
==Set the resource group concerned==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Disable the resource group and resources==
<source lang=bash>
# clrs offline ${RG}
# clrs disable -g ${RG}
</source>
==Show the zpools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show only the zpool names==
<source lang=bash>
# clrs show -p ZPools -g my-rg | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
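The nawk one-liner above just blanks the first field and prints the remainder. It can be checked with plain awk on a hypothetical line of clrs output:

```shell
# Hypothetical line as clrs show -p ZPools would print it.
sample='          Zpools:  my_pool my-redo1_pool my-redo2_pool'
# Blank the first field; awk rebuilds $0, leaving only the pool names.
pools=$(printf '%s\n' "$sample" | awk '$1=="Zpools:"{$1="";print $0;}')
echo "$pools"
```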
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs delete -g ${RG}
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
427c0691aeecfb5f2a091db40fc4837d4abce453
693
692
2015-04-21T15:53:13Z
Lollypop
2
wikitext
text/x-wiki
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g my-rg | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs delete -g ${RG}
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
cba55f6c216d1dd12d34d8e9e00beff3f71318ed
694
693
2015-04-21T15:53:38Z
Lollypop
2
/* ZPools anzeigen */
wikitext
text/x-wiki
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs delete -g ${RG}
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
2940dbf60f3801e565c60f854d912daff236aaf8
695
694
2015-04-24T13:11:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs disable $(clrs list -g ${RG})
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs delete -g ${RG}
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
0fdf316de09f8df647311cb1af720c964bf5ade6
696
695
2015-04-24T13:16:31Z
Lollypop
2
/* Abschalten der Ressource Group und Ressourcen */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs delete -g ${RG}
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
108ae6487512f50734c652c5665d55bb427952cf
697
696
2015-04-24T14:06:35Z
Lollypop
2
/* Abschalten des Device Monitorings */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs delete -g ${RG}
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
c245ffd23e5ed50d8ed1ae592347d3ecdead9acb
698
697
2015-04-24T14:08:41Z
Lollypop
2
/* Ressourcegruppe löschen */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs list -g ${RG} | xargs clrs delete
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
74d62fa714aa7a7d49084aa489f92ebf62a23db7
699
698
2015-04-24T14:17:44Z
Lollypop
2
/* DIDs aufräumen */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs list -g ${RG} | xargs clrs delete
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
==Clean up zone configs if needed==
<source lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</source>
e7b5c719f15b876011e953dca1a28973a490f9ba
700
699
2015-04-24T14:18:07Z
Lollypop
2
/* Bei bedarf Zonenkonfigs aufräumen */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Show the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and its resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show the ZPools (names only)==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
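The text-processing part of that one-liner can be tried offline against captured output. This is a minimal sketch using a sample line (the pool names are the placeholders from the example above; plain awk is used here, since nawk is Solaris-specific):

```shell
# One captured line of `clrs show -p ZPools` output (sample text)
sample='  Zpools:                     my_pool my-redo1_pool my-redo2_pool'

# Same idea as the nawk filter: match the Zpools: label, blank it out,
# and print only the remaining pool names
pools=$(printf '%s\n' "$sample" | awk '$1=="Zpools:"{$1="";print $0}')
echo "$pools"
```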
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or just the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
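The inner `nawk` step in both one-liners turns `zpool status` device lines into whole-disk names by cutting off the slice suffix. A sketch of just that step, with one sample device line (the WWN is a placeholder taken from the example output above):

```shell
# A device line as it would appear in `zpool status` output
line='    c0t600A0B80006E103C00000B9B50B2F83Ed0s6  ONLINE  0 0 0'

# gsub(/s.*$/,"",$1) strips the slice suffix (here: s6), leaving the
# whole-disk name, which is then prefixed with /dev/rdsk/
disk=$(printf '%s\n' "$line" | awk '/c[0-9]+t/{gsub(/s.*$/,"",$1); print $1}')
echo "/dev/rdsk/${disk}"
```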
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs list -g ${RG} | xargs clrs delete
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them, if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
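The `nawk` filter in that loop can also be checked on its own: it keeps only `unusable` entries and cuts the `,LUN` suffix off the Ap_Id, so each controller path is unconfigured once. A sketch against sample `cfgadm` output (the Ap_Ids are made-up placeholders):

```shell
# Sample lines in the shape of `cfgadm -al -o show_SCSI_LUN` output
sample='c3::50080e8008cf2c06,0  disk  connected  configured  unusable
c3::50080e8008cf2c06,1  disk  connected  configured  unusable
c4::50080e8008cf2c16,0  disk  connected  configured  ok'

# Keep unusable entries, strip the ,LUN suffix, deduplicate
aps=$(printf '%s\n' "$sample" | awk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1); print $1}' | sort -u)
echo "$aps"
```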
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
==Clean up zone configs if needed==
<source lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</source>
75f6bf196b8d0e37ebcba878a52edb9451f58ee8
Solaris 11 bootadm
0
207
701
2015-04-29T16:29:36Z
Lollypop
2
The page was created: „[[Kategorie:Solaris11]] ==Booten via SP-Console== <source lang=bash> # bootadm set-menu console=text serial_params='0,115200,8,N,1' # bootadm add-entry -i 1 …“
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console==
<source lang=bash>
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm add-entry -i 1 "Solaris (single-user)"
# bootadm change-entry -i 1 kargs="-s"
# bootadm add-entry -i 2 "Solaris (milestone=none)"
# bootadm change-entry -i 2 kargs="-m milestone=none"
</source>
a07f3fd467f275fbb4315fed0a7bd926c2607bc1
702
701
2015-04-29T16:36:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console==
<source lang=bash>
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm add-entry -i 1 "Solaris (single-user)"
# bootadm change-entry -i 1 kargs="-B console=ttya -s"
# bootadm add-entry -i 2 "Solaris (milestone=none)"
# bootadm change-entry -i 2 kargs="-B console=ttya -m milestone=none"
</source>
c2a06274c76f4811b795f8341f68d6a9695552f8
703
702
2015-04-29T16:36:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console==
<source lang=bash>
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm add-entry -i 1 "Solaris (single-user)"
# bootadm change-entry -i 1 kargs="-B console=ttya -s"
# bootadm add-entry -i 2 "Solaris (milestone=none)"
# bootadm change-entry -i 2 kargs="-B console=ttya -m milestone=none"
</source>
fddf9946f8373758f9b9cf094330d1b93953bbbd
704
703
2015-04-29T16:46:08Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console at 115200 baud==
<source lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B console=ttya,ttya-mode='115200,8,n,1,-'"
# bootadm add-entry -i 1 "Solaris (single-user)"
# bootadm change-entry -i 1 kargs="-B console=ttya,ttya-mode='115200,8,n,1,-' -s"
# bootadm add-entry -i 2 "Solaris (milestone=none)"
# bootadm change-entry -i 2 kargs="-B console=ttya,ttya-mode='115200,8,n,1,-' -m milestone=none"
</source>
a40df2300d7f1a7ac501f305e5074b4fc27a900e
705
704
2015-04-29T16:56:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console at 115200 baud==
<source lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-'"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-' -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-' -xs"
# bootadm add-entry -i 3 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-' -x -m milestone=none"
</source>
2f0cb0737fb79620a348ba411227bbd3f42aa129
706
705
2015-04-30T06:08:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console at 115200 baud==
Not working... will change soon...
<source lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-'"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-' -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-' -xs"
# bootadm add-entry -i 3 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya,ttya-mode='115200,8,n,1,-' -x -m milestone=none"
</source>
cc18de0a4e52eaa55ec1688a805b291d7a5556db
707
706
2015-04-30T06:44:33Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console at 115200 baud==
Add a new ttydef with 115200:
<source lang=bash>
# echo "console115200:115200 hupcl opost onclr:115200::console" >> /etc/ttydefs
</source>
Set the new console for system/console-login:default
<source lang=bash>
# svccfg -s svc:/system/console-login:default setprop ttymon/label=console115200
# svcadm refresh svc:/system/console-login:default
# svcadm restart svc:/system/console-login:default
</source>
Set up your boot menu:
<source lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya -xs"
# bootadm add-entry -i 3 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya -x -m milestone=none"
</source>
675e5b4b2732f45ea3da9268f3ecced27bd1766b
709
707
2015-04-30T09:01:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Booting via SP console at 115200 baud==
Add a new ttydef with 115200:
<source lang=bash>
# echo "console115200:115200 hupcl opost onclr:115200::console" >> /etc/ttydefs
</source>
Set the new console for system/console-login:default
<source lang=bash>
# svccfg -s svc:/system/console-login:default setprop ttymon/label=console115200
# svcadm refresh svc:/system/console-login:default
# svcadm restart svc:/system/console-login:default
</source>
Set up your boot menu:
<source lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya -xs"
# bootadm add-entry -i 3 "Solaris (kernel debugger)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya -k"
# bootadm add-entry -i 4 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 4 kargs="-B \$zfs_bootfs,console=ttya -x -m milestone=none"
</source>
bebfae2c198514fa85f0882b70b6aff0bb25a4e5
Solaris 11 Networking
0
96
708
267
2015-04-30T08:37:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To keep automatic network configuration from reverting your changes, you have to enable the manual configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
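The ipmpstat output above can also be post-processed non-interactively; this is a minimal sketch that picks the non-active (e.g. standby) interfaces out of sample <tt>ipmpstat -i</tt> output mirroring the listing above (awk in place of Solaris nawk):
<source lang=bash>
# Sketch: list the interfaces that are currently not active, based on the
# ACTIVE column of ipmpstat -i. The sample lines mirror the output shown
# above; on a live system, pipe `ipmpstat -i` directly.
printf '%s\n' \
  'INTERFACE   ACTIVE  GROUP  FLAGS    LINK  PROBE  STATE' \
  'net3        yes     ipmp0  --mbM--  up    ok     ok' \
  'net2        no      ipmp0  is-----  up    ok     ok' |
  awk 'NR>1 && $2=="no" {print $1}'
</source>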
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in to the new IP.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
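The perl in-place edit can be checked against a scratch file first; this is a sketch of a sed equivalent, assuming the stock single-entry default line (it does not touch /etc/nsswitch.conf itself):
<source lang=bash>
# Sketch: apply the hosts-line substitution to a scratch copy before touching
# the real /etc/nsswitch.conf. Assumes the stock "hosts: files" default line.
tmp=$(mktemp)
echo 'hosts:      files' > "$tmp"
sed -E 's/^hosts:[[:space:]]+files$/hosts: files dns/' "$tmp"
rm -f "$tmp"
</source>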
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
== Set tcp/udp parameter (formerly ndd) ==
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
c84aa6b400bfeb3aaa35a39030bc56a56b0e4643
Solaris IO Analyse
0
208
710
2015-04-30T14:13:29Z
Lollypop
2
The page was created: „[[Kategorie:Solaris]] ==Which filesystem is busy?== For zfs (-F zfs) you can use this oneliner: <source lang=bash> # fsstat -i $(df -hF zfs | nawk '{print $N…“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Which filesystem is busy?==
For ZFS (-F zfs) you can use this one-liner:
<source lang=bash>
# fsstat -i $(df -hF zfs | nawk 'NR>1{print $NF}') 5
</source>
304dcd3f622ac8794e901f05ef306c74944735dc
Solaris IO Analyse
0
208
711
710
2015-04-30T14:17:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Which filesystem is busy?==
For ZFS (-F zfs) you can use this one-liner:
<source lang=bash>
# fsstat -i $(df -hF zfs | nawk 'NR>1{print $NF}') 5
</source>
==Links==
* [https://blogs.oracle.com/BestPerf/entry/i_o_analysis_using_dtrace I/O analysis using DTrace]
* [http://www.brendangregg.com/DTrace/dtrace_oneliners.txt Brendan Gregg's DTrace one-liners]
198aa5e3f5db0fc7ee48bddc323cd6e764edf55d
Solaris SMF
0
100
712
674
2015-05-07T16:05:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
==Setting multiple parameters to environment variables==
The goal:
Setting -Xmx from 512m to 2G
The problem:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to set the complete environment this way:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
2c3492f50263bcbb64a8cae0373976aff98d5930
713
712
2015-05-07T16:06:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
==Setting multiple parameters to environment variables==
The goal:
Setting -Xmx from 512m to 2G
The problem:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to set the complete environment this way:
Get the complete environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</source>
Set the complete (modified) environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
25a49776f50e76311015d9e101f87ecd1ad5a351
714
713
2015-05-07T16:07:12Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
==Setting multiple parameters to environment variables==
The goal:
Setting -Xmx from 512m to 2G
The problem:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to set the complete environment this way:
Get the complete environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</source>
Set the complete (modified) environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
914e993650e08cb8fa17022abf3955e67aa20d4a
715
714
2015-05-07T16:08:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
==Setting multiple parameters to environment variables==
The goal:
* Setting -Xmx from 512m to 2G
The problem:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to set the complete environment this way:
* Get the complete environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</source>
* Set the complete (modified) environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
* Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
d91e8d6d38dcd495a80583356ff65279bb806181
728
715
2015-05-12T06:40:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
__FORCETOC__
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</source>
==Setting multiple parameters to environment variables==
The goal:
* Setting -Xmx from 512m to 2G
The problem:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to set the complete environment this way:
* Get the complete environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</source>
* Set the complete (modified) environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
* Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
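Hand-editing the long environment string is error-prone. As a minimal, untested sketch (the variable names and the sed call are my own, not part of the SMF tooling), the single -Xmx token could be rewritten programmatically before the result is fed back into the setprop command shown above:

<source lang=bash>
# Hypothetical helper: rewrite only the -Xmx token inside the
# environment string reported by listprop, leaving the rest intact.
ENV_LINE='"PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m"'
NEW_ENV_LINE=$(printf '%s\n' "${ENV_LINE}" | sed 's/-Xmx512m/-Xmx2G/')
printf '%s\n' "${NEW_ENV_LINE}"
# Paste the printed value into: setprop method_context/environment = astring: '(...)'
</source>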
656fcef06e0d358655a36aeccc279fb61c79c042
Category:Hardware
14
209
716
2015-05-08T08:44:11Z
Lollypop
2
The page was newly created: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
SunServer
0
210
717
2015-05-08T08:44:36Z
Lollypop
2
The page was newly created: „[[Kategorie:Hardware]]“
wikitext
text/x-wiki
[[Kategorie:Hardware]]
31523e955d52e57c3a02392cd235709b8fa6f1ab
718
717
2015-05-08T08:47:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
==Set SP IP address from OS via ipmitool==
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
9f84c54717bc53e8cefa2beefdc8958f25654d52
719
718
2015-05-08T10:12:40Z
Lollypop
2
/* Set SP IP address from OS via ipmitool */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
==Set SP IP address from OS via ipmitool==
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
563518aeeb366c85d304afca1df358fbfd3d853c
720
719
2015-05-08T10:58:14Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
</source>
312f1a3cb266f81cc28cd3ce1ff64c2767dffa4f
721
720
2015-05-08T11:11:48Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
</source>
667892381235b7f1c53cd0da5a3e799af5f75867
722
721
2015-05-08T11:12:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
ec371ec9ad4d716a28520c2bd26dfd68a813ea3f
723
722
2015-05-08T11:23:27Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
4c4f7d58a4ad92a2f29a7b71d0c674a9e25ba939
761
723
2015-06-18T09:22:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
61f6e011ea16fe7c08baac76bf2afd8880a18492
NetApp SP
0
211
724
2015-05-08T11:42:06Z
Lollypop
2
The page was newly created: „[[Kategorie:Hardware|NetApp]] [[Kategorie:NetApp]]“
wikitext
text/x-wiki
[[Kategorie:Hardware|NetApp]]
[[Kategorie:NetApp]]
209ee33755fe7592602a00fc5d7348fa6216d8e2
725
724
2015-05-08T11:42:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware|NetApp]]
[[Kategorie:NetApp]]
9136ff8ecdf9227e5184db098dc2e76887a77122
726
725
2015-05-08T11:42:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware|NetApp]]
[[Kategorie:NetApp]]
209ee33755fe7592602a00fc5d7348fa6216d8e2
727
726
2015-05-08T11:47:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware|NetApp]]
[[Kategorie:NetApp]]
== Setup SP IP address==
<source lang=bash>
filer01> system node service-processor network modify -address-type IPv4 -ip-address 172.32.40.54 -netmask 255.255.255.0 -gateway 172.32.40.1 -enable true
filer01> system node service-processor reboot-sp
Note: If your console connection is through the SP, it will be disconnected.
Do you want to reboot the SP ? {y|n}: y
</source>
eb7f3d3338e16918614dd07c5e40f205b21c7b21
Template:TOCright
10
212
729
2015-05-12T06:46:42Z
Lollypop
2
The page was newly created: „<onlyinclude><includeonly><div class="float-right" {{#if:{{{Breite|}}}|style="max-width:{{{Breite}}};"}}>__TOC__</div></includeonly><noinclude>{{Dokumentation}…“
wikitext
text/x-wiki
<onlyinclude><includeonly><div class="float-right" {{#if:{{{Breite|}}}|style="max-width:{{{Breite}}};"}}>__TOC__</div></includeonly><noinclude>{{Dokumentation}}</noinclude></onlyinclude>
b8cc428c85571e246fedcc12939f261215c8cf05
SunCluster Delete Ressource Group
0
206
730
700
2015-05-12T14:57:40Z
Lollypop
2
/* Take the resource group and resources offline */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that is used later in the one-liners.
Do not do this! Once again, I take no responsibility! Everything is wrong! Do not do it!
==Set the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==List the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Take the resource group and resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Show the ZPools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Show only the ZPool names==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Show the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or only the DID instance numbers:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
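The device-name extraction step buried in the one-liners above can be checked in isolation. A minimal sketch with made-up sample input (awk stands in for nawk here, and the device name is shortened for readability):

<source lang=bash>
# Feed one fake 'zpool status' device line through the extraction step:
# strip the slice suffix (s2) and print the bare ctd name.
printf '          c0t600A0B80006E103Cd0s2  ONLINE       0     0     0\n' | \
  awk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}'
# c0t600A0B80006E103Cd0
</source>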
==Disable the device monitoring==
This is important for getting the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</source>
==Delete the resource group==
<source lang=bash>
# clrs list -g ${RG} | xargs clrs delete
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them if applicable...
==Remove LUNs that no longer exist from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Clean up the DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
==Clean up zone configs if needed==
<source lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</source>
36c556a017f0ca0940b33cc4a476de1def52908f
ZFS Networker
0
158
731
667
2015-05-28T07:10:08Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS|Backup]]
[[Kategorie:Backup|Networker]]
[[Kategorie:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I use bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
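The derived values can be sanity-checked before running anything destructive (output shown is for NAME=sample, as above):

<source lang=bash>
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
echo "${RGname} ${NetworkerGroup} ${ZPOOL}"
# sample-rg SAMPLE sample_pool
</source>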
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For all but get_slaves redirect output to log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin port
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting while cluster monitoring is active, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
echo "Usage: $0 dump_cluster <Resource_Group> <DIR>"
echo "Usage: $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong number of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the directory in which to write the ZFS setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
aedff1a8bfa8ee0a012cd7def115e626
</source>
!!!THIS CODE IS UNTESTED. DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
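The derived names rely on bash suffix removal: ${NSR_CLIENT%-cl} strips a trailing -cl from the client name. A quick sanity check of the expansions above (plain bash, no cluster needed):

```shell
# Bash suffix-removal: ${NSR_CLIENT%-cl} drops the trailing "-cl"
NSR_CLIENT="sample-cl"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
echo "${RG} ${ZONE}"   # sample-rg sample-zone
```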
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Look at the file /tmp/ZFS_Setup.sh, which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
Mount the needed ZFS filesystems.
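A minimal sketch of that step, assuming the dataset names from the example ZFS_Setup.sh above: print one mount command per dataset first (dry run), review the output, then pipe it to a shell to actually mount.

```shell
# Dry run: print one "zfs mount" per dataset from the example setup.
# Review the output, then append "| sh" to actually run the mounts.
for fs in app cluster_config data1 data2 home log usr_local zone; do
  echo "/usr/sbin/zfs mount sample_pool/${fs}"
done
```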
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</source>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</source>
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</source>
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name to which we can connect: sample-lh
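To verify the new client name, one could run a manual save against it, mirroring the save call that is commented out in the backup script above. This just prints the command; the server and group names are placeholders, not from a real setup:

```shell
# Dry run: print the manual save command for the new client name.
SERVER_NAME=nsr-server   # placeholder
GROUP_NAME=SAMPLE        # placeholder
CLIENT_NAME=sample-lh
echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} /local/sample-rg
```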
ca922567adb15cdd163e7ed2bea0918ffab8cc9d
732
731
2015-05-28T07:12:30Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS|Backup]]
[[Kategorie:Backup|Networker]]
[[Kategorie:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
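For NAME=sample the definitions expand as follows (NetworkerGroup is just the name uppercased via tr):

```shell
# Expansion check for NAME=sample
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
echo "${RGname} ${NetworkerGroup} ${ZPOOL}"   # sample-rg SAMPLE sample_pool
```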
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED. DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For everything except get_slaves, redirect output to the log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin password
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting while cluster monitoring is active, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
echo "Usage: $0 dump_cluster <Resource_Group> <DIR>"
echo "Usage: $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong number of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the directory in which to write the ZFS setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
01be6677ddf4342b625b1aa59d805628
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
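The derived names rely on the shell's `%` suffix-strip expansion; a quick standalone check of what it produces:

```shell
# The "%-cl" expansion strips the trailing "-cl" from the client
# name, so resource group and zone share one common prefix.
NSR_CLIENT="sample-cl"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
echo "${RG} ${ZONE}"
# prints: sample-rg sample-zone
```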
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Look into the file /tmp/ZFS_Setup.sh, which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
# Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
Mount the needed ZFS filesystems.
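The mount step can be sketched as a loop that only prints the commands for review before running them (the dataset names are assumptions taken from the ZFS_Setup.sh example above):

```shell
# Dry run: print the zfs mount command for each restored dataset.
# The dataset list is an assumption based on the ZFS_Setup.sh example.
ZPOOL="sample_pool"
for fs in app cluster_config data1 data2 home log usr_local zone
do
    echo "/usr/sbin/zfs mount ${ZPOOL}/${fs}"
done
```

Pipe the output into a shell (or run the commands by hand) once it looks right.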
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</source>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</source>
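The perl -pi -e call above only rewrites the nsr_backup paths to /tmp inside the command file. The same substitution, shown non-destructively with sed on one sample line (sample-rg assumed):

```shell
# Show the path rewrite on a single sample line; the perl -pi -e
# call above performs the same edit in place on the whole file.
RG="sample-rg"
echo "/usr/cluster/bin/clrg create -i /local/${RG}/cluster_config/nsr_backup/${RG}.clrg_export.xml ${RG}" | \
    sed -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g"
# prints: /usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
```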
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</source>
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node, run:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
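The `$(basename ${RGname} -rg)` trick works because basename strips a trailing suffix given as its second argument; a quick check:

```shell
# basename drops the "-rg" suffix, so every resource name in the
# script is derived from the bare cluster prefix.
RGname=sample-rg
prefix=$(basename ${RGname} -rg)
echo "${prefix}-hasp-zfs-res ${prefix}-lh ${prefix}-nsr-res"
# prints: sample-hasp-zfs-res sample-lh sample-nsr-res
```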
1d39148d9c9ccd3d20fd4e2cf7c67f50e51c98d4
HP 3par
0
213
749
742
2015-06-04T05:52:38Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
Unsorted collection...
It doesn't work like this...
<source lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 LUB_VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 LUB_VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 LUB_VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 LUB_VV_DB_TEST01_DATA_DS.4 2T
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 LUB_VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 LUB_VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 LUB_VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 LUB_VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</source>
4864494b0fd98c1e7c706db342ff05613987e2c3
ZFS sync script
0
215
743
2015-06-02T13:26:56Z
Lollypop
2
The page was newly created: „[[Kategorie:ZFS]] Like all of my scripts this script is coming without any guaranties!!! You can use it on your own risk! ==About the script== * It uses [ht…“
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts, this script comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark a dataset for copying to the backup host, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* Exchange ssh keys so the script can log in without a password.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
Good luck!
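The dataset selection in the script's get_src_list function can be seen in isolation: only entries whose property column matches the backup host survive, and a parent dataset is dropped as soon as a marked child exists. A small sketch with made-up dataset names:

```shell
# Minimal reproduction of the get_src_list selection logic:
# keep datasets marked for this backup host, dropping parents
# that have a marked child (index()==1 means "name is a prefix").
printf 'tank/a\tfilesystem\tbackuphost\ntank/a/b\tfilesystem\tbackuphost\ntank/c\tfilesystem\totherhost\n' | \
awk -v backup_server=backuphost '
    ( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
        path[$1]=1;
        for(name in path){
            if( index($1,name)==1 && name != $1 && path[name]!=0 ) path[name]=0;
        }
    }
    END{ for(name in path) if(path[name]==1) print name }'
# prints: tank/a/b
```

Note the prefix test is purely textual, so a sibling that merely starts with the same string (tank/ab vs. tank/a) would also be treated as a child.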
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# "yes" means encrypt the stream over the network with SSH; any other value means plain mbuffer.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc arcfour128"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# What to do on interruption
# -------------------------
trap 'echo "\n--- Signal empfangen: Exiting ...\n"; \
date ; \
rm -f ${LOCK_FILE}; \
sleep 3; kill -9 ${!} 2>/dev/null; exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and out...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots ${first} to ${last}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
b4024ddb7f70ddc24d25a105586d751cc5650fea
744
743
2015-06-02T13:28:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts, this script comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark a dataset for copying to the backup host, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* Exchange ssh keys so the script can log in without a password.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
Good luck!
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# "yes" means encrypt the stream over the network with SSH; any other value means plain mbuffer.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc arcfour128"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# What to do on interruption
# -------------------------
trap 'echo "\n--- Signal empfangen: Exiting ...\n"; \
date ; \
rm -f ${LOCK_FILE}; \
sleep 3; kill -9 ${!} 2>/dev/null; exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and out...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots ${first} .. ${last}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "Dataset exists on dst, but has no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
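The pool-name rewrite done by ''convert_to_poolname'' above is just an anchored sed substitution. A minimal standalone sketch (the dataset and pool names are made up for illustration, matching the placeholder defaults):

<source lang=bash>
#!/usr/bin/bash
# Same logic as convert_to_poolname: replace the leading pool name only.
convert_to_poolname () {
    echo "$1" | sed -e "s#^$2#$3#g"
}
# my_source_zpool/home/lars@backup_20130101-12:00
# becomes my_local_destination_zpool/home/lars@backup_20130101-12:00
convert_to_poolname "my_source_zpool/home/lars@backup_20130101-12:00" \
    my_source_zpool my_local_destination_zpool
</source>

Because the pattern is anchored with ''^'', only the pool component at the start of the name is rewritten, even if the same string appears again later in the dataset path.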
f72e98a821b4544879ec481a7a9860e78cf34611
745
744
2015-06-02T13:28:51Z
Lollypop
2
/* About the script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts this script is coming without any guaranties!!!
You can use it on your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark the datasets to copy from the backup host use this on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* Make an ssh-key exchange to login without password.
* If you don't want to use root as backup-user on source host do this to create a ''zfssync'' user:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
Good luck!
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc arcfour128"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Was tun bei Unterbrechung
# -------------------------
trap 'echo "\n--- Signal empfangen: Exiting ...\n"; \
date ; \
rm -f ${LOCK_FILE}; \
sleep 3; kill -9 ${!} 2>/dev/null; exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is allready running as PID $(/usr/bin/cat ${LOCK_FILE}) look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# und raus...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[$i]} with $# Arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange we do not have the copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v intitial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>intitial_copies){
print last[count-intitial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
543763fe35174dd428a378556bf1ca17602cd04a
746
745
2015-06-02T13:31:30Z
Lollypop
2
/* About the script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts this script is coming without any guaranties!!!
You can use it on your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark the datasets to copy from the backup host use this on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as backup-user on source host do this to create a ''zfssync'' user:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Make an ssh-key exchange to login without password for SRC_USER.
Good luck!
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc arcfour128"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Was tun bei Unterbrechung
# -------------------------
trap 'echo "\n--- Signal empfangen: Exiting ...\n"; \
date ; \
rm -f ${LOCK_FILE}; \
sleep 3; kill -9 ${!} 2>/dev/null; exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is allready running as PID $(/usr/bin/cat ${LOCK_FILE}) look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# und raus...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[$i]} with $# Arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange we do not have the copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v intitial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>intitial_copies){
print last[count-intitial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
# the 3600*strftime("%H",0) term cancels the local timezone offset in the duration
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
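The ''expire_dst_pool_snapshots 34 70'' call above expires a destination snapshot only when it is both older than ''days_to_keep'' days and outside the ''min_to_keep'' newest ones. A simplified, self-contained sketch of that rule for a single filesystem (plain epoch integers stand in for the parsed creation dates; all names and numbers are invented):

```shell
# Input: "<creation-epoch> <snapshot>", newest first (like 'zfs list -S creation')
list=$(mktemp)
printf '%s %s\n' \
    999000 tank@d \
    998000 tank@c \
    500000 tank@b \
    400000 tank@a > "${list}"
# Expire only entries that are older than the cutoff AND beyond the
# min_to_keep newest -- the same two-sided condition the script uses
expired=$(awk -v cutoff=900000 -v min_to_keep=3 '
    { count++ }
    $1 < cutoff && count > min_to_keep { print $2 }
' "${list}")
echo "expired: ${expired}"
rm -f "${list}"
```

''tank@b'' is old enough but survives as one of the three newest; only ''tank@a'' is expired.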
c1ae76b338edd40f22987ff6608763c61e4d1943
747
746
2015-06-02T13:31:51Z
Lollypop
2
/* About the script */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts, this one comes without any guarantees!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer], which is easy to compile.
* It uses gawk.
* The variable ''SECURE'' controls whether the stream is encrypted with ssh. Set it to ''yes'' or ''no''.
* To mark the datasets that the backup host should copy, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
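The script selects datasets by reading the ''de.timmann:auto-backup'' property back from a cached ''zfs list'' dump. The selection logic (the same awk as the script's ''get_src_list'') can be tried standalone; the dataset names and the ''backuphost'' value below are made up:

```shell
# Fake 'zfs list -H -o name,type,de.timmann:auto-backup' output
list=$(mktemp)
printf '%s\t%s\t%s\n' \
    tank filesystem - \
    tank/home filesystem backuphost \
    tank/home/alice filesystem backuphost \
    tank/scratch filesystem - > "${list}"
# Keep only the deepest datasets tagged for this backup host,
# mirroring get_src_list: tagged ancestors are zeroed out
selected=$(awk -v backup_server=backuphost '
    ( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
        path[$1]=1;
        for(name in path){
            if( index($1,name)==1 && name != $1 && path[name]!=0 ){
                path[name]=0;
            }
        }
    }
    END{ for(name in path) if(path[name]==1) print name }
' "${list}")
echo "${selected}"
rm -f "${list}"
```

Here ''tank/home'' is pruned because its tagged child ''tank/home/alice'' is more specific.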
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Set SECURE to "yes" to encrypt the stream with SSH; any other value means plain mbuffer over TCP.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc arcfour128"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# What to do on interruption
# -------------------------
trap 'echo "\n--- Signal received: Exiting ...\n"; \
date ; \
rm -f ${LOCK_FILE}; \
sleep 3; kill -9 ${!} 2>/dev/null; exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); see ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# drop name if it is a path prefix of $1 (keep only the deepest marked dataset)
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# first match is enough -- skip straight to END
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots ${first} .. ${last} ..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Trying to back up from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... no common snapshot found -- now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
# the 3600*strftime("%H",0) term cancels the local timezone offset in the duration
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
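The ''is_available'' helper in the script above does an exact first-field match against the cached dataset list and signals the result through both stdout and its exit code. A standalone run with made-up snapshot names:

```shell
list=$(mktemp)
printf '%s\tsnapshot\n' \
    'tank/data@backup_20150601-03:00' \
    'tank/data@backup_20150602-03:00' > "${list}"
# Same awk as is_available: print the name and exit 0 only on an exact match
found=$(awk -v snapshot='tank/data@backup_20150602-03:00' \
    'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' "${list}")
missing=$(awk -v snapshot='tank/data@nope' \
    'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' "${list}") || true
echo "found=${found} missing=${missing}"
rm -f "${list}"
```

The callers only test whether the captured output is non-empty, which is why the helper prints the matched name.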
adde3671f29f70244660ee50a0dd2d3d556afc9d
748
747
2015-06-03T11:15:13Z
Lollypop
2
/* zfs_sync.sh */
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts, this one comes without any guarantees!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer], which is easy to compile.
* It uses gawk.
* The variable ''SECURE'' controls whether the stream is encrypted with ssh. Set it to ''yes'' or ''no''.
* To mark the datasets that the backup host should copy, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
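The key exchange for ''SRC_USER'' can be sketched like this (the key path is generated fresh here, and the ''ssh-copy-id'' step is commented out because it needs the real source host; on Solaris you may have to append the public key to ''authorized_keys'' by hand instead):

```shell
# Generate a passphrase-less key so the cron-driven script can log in unattended
KEYFILE=${KEYFILE:-$(mktemp -d)/id_rsa_zfssync}
ssh-keygen -q -t rsa -b 4096 -N "" -f "${KEYFILE}"
ls "${KEYFILE}" "${KEYFILE}.pub"
# Then install the public key for the backup user on the source host:
# ssh-copy-id -i "${KEYFILE}.pub" zfssync@my_source_server
```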
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Set SECURE to "yes" to encrypt the stream with SSH; any other value means plain mbuffer over TCP.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc arcfour128"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'echo "\n--- Got signal: Exiting ...\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
end 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); see ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# drop name if it is a path prefix of $1 (keep only the deepest marked dataset)
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# first match is enough -- skip straight to END
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots ${first} .. ${last} ..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Trying to back up from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... no common snapshot found -- now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
# the 3600*strftime("%H",0) term cancels the local timezone offset in the duration
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
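The ''convert_to_poolname'' helper in the script above maps names between pools by rewriting only the leading pool component, leaving the rest of the dataset path and the snapshot suffix untouched. For example (pool names are the script's placeholders):

```shell
src='my_source_zpool/projects/data@backup_20150603-03:00'
# Anchored sed substitution, exactly as in convert_to_poolname
dst=$(echo "${src}" | sed -e "s#^my_source_zpool#my_local_destination_zpool#g")
echo "${dst}"
```

Because the pattern is anchored with ''^'', a pool name that happens to recur deeper in the path is not rewritten.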
584f4b00606ea2701d088993873df6485994ada0
750
748
2015-06-04T11:04:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS]]
Like all of my scripts, this one comes without any guarantees!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer], which is easy to compile.
* It uses gawk.
* The variable ''SECURE'' controls whether the stream is encrypted with ssh. Set it to ''yes'' or ''no''.
* To mark the datasets that the backup host should copy, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this:
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
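Each run creates one new snapshot per marked dataset, named with the ''timestamp'' helper defined in the script below. A quick look at the resulting name (the dataset name is made up):

```shell
# Same format string as the script's timestamp() helper
stamp=$(date '+%Y%m%d-%H:%M')
snap="tank/data@backup_${stamp}"
echo "${snap}"
```

The ''@backup_'' prefix is what the expiry functions later match on, so only snapshots created by this script are ever destroyed by it.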
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Set SECURE to "yes" to encrypt the stream with SSH; any other value means plain mbuffer over TCP.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'echo "\n--- Got signal: Exiting ...\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
end 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); see ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# drop name if it is a path prefix of $1 (keep only the deepest marked dataset)
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# first match is enough -- skip straight to END
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots ${first} .. ${last} ..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange we do not have the copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v intitial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>intitial_copies){
print last[count-intitial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
[[Kategorie:ZFS]]
Like all of my scripts, this script comes without any guarantees!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines whether the stream is encrypted with ssh. Set it to ''yes'' or ''no''.
* To mark the datasets that the backup host should copy, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this (Solaris syntax):
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Exchange SSH keys so that ''SRC_USER'' can log in without a password.
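The key exchange above can be sketched as follows; ''my_source_server'' is the placeholder host name already used in the script below, and the exact commands may differ on older Solaris releases:
<source lang=bash>
# On the backup/destination host: create a key pair without a passphrase
# and append the public key to zfssync's authorized_keys on the source.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
cat ~/.ssh/id_rsa.pub | ssh zfssync@my_source_server \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys'
</source>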
Good luck!
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'printf "\n--- Got signal: Exiting ...\n\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
end 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}), look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and bail out...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with only $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots ${first} to ${last}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
=Solaris Loadgenerator=
This is a little script to generate CPU load. It uses bzip2 and gzip to compress data fetched from the void back into the void again :-).
Call it as <scriptname> <number> to generate a load of <number>.
<source lang=bash>
#!/usr/bin/bash
count=$1
for((i=1;i<=${count};i++))
do
cat /dev/urandom | bzip2 | gzip -9 >/dev/null &
done
</source>
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slot 4 and slot 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we sit, together with two storage systems, on the switch with ID 1 and have a link to the switch with ID 3, to which two more storage systems are attached.
* Node WWN
Here we see four disk devices with two entries each (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths over our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
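The Port_ID column is the 24-bit Fibre Channel port address. Assuming it splits as <domain/switch ID><area/switch port><AL_PA>, one byte each (as in the reading above), a tiny helper — purely illustrative, not part of luxadm — decodes the entries:
<source lang=bash>
# Zero-pad the FC address to six hex digits and split it into
# <domain/switch ID> <area/switch port> <AL_PA>, one byte each.
decode_port_id () {
    local id
    id=$(printf '%06x' $((16#$1)))
    echo "switch=${id:0:2} port=${id:2:2} alpa=${id:4:2}"
}
decode_port_id 30200   # switch=03 port=02 alpa=00
decode_port_id 10800   # switch=01 port=08 alpa=00
</source>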
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough firmware indication (firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, one block follows consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host over which paths it should primarily access the LUN.
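To get a quick per-path overview of the Class/State pairs, the ''luxadm display'' output can be piped through a small awk filter (a sketch that assumes the two-line Class/State layout shown above):
<source lang=bash>
# Print "class state" for every path block of a "luxadm display" run.
path_summary () {
    awk '$1=="Class"{class=$2} $1=="State"{print class, $2}'
}
# Usage (device path from the example above):
# luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2 | path_summary
</source>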
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Alle aufräumen:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful! Does a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Kommandos : Common Array Manager=
==lsscs==
Ist unter Solaris in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Kommandos : Brocade=
==Switch-Kommandos==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
Was sagt uns das?
# Dies ist der "Principal" (alle andere sind "Subordinate") der Fabric "Fabric1" (switchRole:, zoning:)
# Der Switch ist gezoned (zoning:)
# SwitchID ist "fffc01"
# Es ist ein 24-Port Switch
# Es gibt einen doppelten ISL (InterSwitchLink) zu einem anderen Switch E-Port (san-sw_21)
# 6 Ports sind mit SFPs bestückt, aber nicht belegt (0,4,11-15)
# 8 Ports haben keine Lizenz und auch kein SFP (No_Module)
# 9 Ports sind belegt
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port-Kommandos==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Möchte man sehen, welche WWNs sich hinter einem NPIV-Port verbergen, so hilft portloginshow.
==Zone-Kommandos==
===zoneshow===
===alicreate===
===alishow===
==Backup der Switchconfig per Script==
===Put the backup host ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup users ~/.ssh/authorized_keys on backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script zum parsen einer configupload-Datei==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Kommandos: NetApp=
==fcp topology show : Wo hängt mein Frontend-SAN?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Welche WWN habe ich?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
Kleines Schmankerl ist noch die "host address", die uns zeigt, daß wir auf Switch-ID 01 Port 06 hängen.
==fcp wwpn-alias (set|show) : Aliasnamen für mehr Klarheit beim Debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (mit Solaris und ZPool)==
Wenn man wissen möchte, welche NetApp LUNs zu einem ZPool gehören, geht das folgender Maßen:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Beispiel:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Sonstiges=
==Alle WWNs in einem File finden==
Gibt nur die WWNs aus, auch mehrere, wenn mehrere in einer Zeile sind.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
3ea7baf23392aff4d401709955b7e44e948e95b3
759
758
2015-06-10T06:56:55Z
Lollypop
2
/* fcinfo remote-port --port -- */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <switch domain><switch port (hex)><AL_PA>
So there are obviously two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 0x14 (= 20) : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached, together with two storage systems, to the switch with ID 1, and there is a link to a switch with ID 3 to which two more storage systems are attached.
* Node WWN
Here we see four disk devices with two entries each (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths via our single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk Device: storage system
Host Bus Adapter: FC card
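The Port_ID decoding described above can be scripted. A minimal sketch (the helper name `decode_portid` is ours, not a system tool), assuming the standard 24-bit FC address layout of one hex byte each for domain, port and AL_PA:

```shell
# Hypothetical helper: split a luxadm Port_ID into switch domain, switch port
# and AL_PA. luxadm drops leading zeros, so pad back to six hex digits first.
decode_portid() {
  local pid="000000$1"
  pid="${pid: -6}"                      # keep the last six hex digits
  printf 'domain=%d port=%d alpa=0x%s\n' \
    "0x${pid:0:2}" "0x${pid:2:2}" "${pid:4:2}"
}

decode_portid 30200   # → domain=3 port=2 alpa=0x00
decode_portid 11400   # → domain=1 port=20 alpa=0x00 (port field 0x14 = 20)
```

Note that the port field is hexadecimal, so Port_ID 11400 means physical port 20, not 14.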
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So a StorageTek 6580/6780.
* Revision: 0784
Rough indication of the firmware level (firmware version: 07.84.47.10).
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up everything:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
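Before actually cleaning up, the pipeline can be dry-run on captured output so it only prints the commands it would execute. A sketch (the function name is ours; plain `awk` stands in for the Solaris-specific `nawk`):

```shell
# Dry run: turn saved "cfgadm -alo show_SCSI_LUN" output into the matching
# unconfigure commands without executing anything.
build_unconfigure_cmds() {
  awk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' \
    | sort -u \
    | sed 's/^/cfgadm -c unconfigure -o unusable_SCSI_LUN /'
}

printf '%s\n' \
  'c8::21000024ff2d49a2,0 disk connected configured unusable' \
  'c8::21000024ff2d49a2,1 disk connected configured unusable' \
  | build_unconfigure_cmds
# → cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
```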
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this issues a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# 7 ports are fitted with SFPs but unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
If you want to see which WWNs hide behind an NPIV port, portloginshow helps.
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via a script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
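The trickiest part of the script is the vendor lookup: `if(WWN ~ /^5/){start=2;}else{start=5;}` picks where the OUI sits inside the WWN. As a standalone sketch of that logic (the function name is ours): NAA 5 WWNs carry the OUI directly after the leading nibble, while the 1x/2x formats seen here carry it in bytes 3-5:

```shell
# Extract the vendor OUI from a WWN, mirroring the awk script's logic.
oui_of() {
  local wwn
  wwn=$(printf '%s' "$1" | tr -d ':')   # strip the colons
  case "$wwn" in
    5*) printf '%s\n' "${wwn:1:6}" ;;   # NAA 5: OUI after the first nibble
    *)  printf '%s\n' "${wwn:4:6}" ;;   # 1x/2x formats: OUI in bytes 3-5
  esac
}

oui_of 21:00:00:24:ff:05:74:e4   # → 0024ff (Qlogic in the vendor table)
oui_of 50:0a:09:81:8d:32:5d:c4   # → 00a098 (NetApp)
```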
=Commands: NetApp=
==fcp topology show : where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice bonus is the "host address", which shows us that we are attached to switch ID 01, port 06.
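Decoding the 24-bit host address by hand works like for any FC address: one hex byte each for switch domain, port and AL_PA (a small sketch):

```shell
addr=010600
printf 'switch domain %d, port %d, alpa 0x%s\n' \
  "0x${addr:0:2}" "0x${addr:2:2}" "${addr:4:2}"
# → switch domain 1, port 6, alpa 0x00
```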
==fcp wwpn-alias (set|show) : aliases for more clarity while debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZFS pools)==
If you want to know which NetApp LUNs belong to a ZFS pool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
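The gsub() in the one-liner only strips the trailing slice suffix from the device name; the same step in isolation looks like this (a standalone sketch of that one substitution):

```shell
# Strip a trailing Solaris slice suffix (s0, s2, ...) from a device
# name, which is exactly what gsub(/s[0-9]+$/,"",$1) does above.
echo 'c5t60A98000433544625634696B76705370d0s0' | sed 's/s[0-9]*$//'
# -> c5t60A98000433544625634696B76705370d0
```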
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Outputs only the WWNs; if a line contains several, all of them are printed.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
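If gawk is not at hand, the same extraction can be approximated with grep, assuming your grep supports the (very common, but non-POSIX) -o flag:

```shell
# Print every colon-separated WWN on its own line, including
# several matches taken from one input line.
printf 'zone1 10:00:00:05:33:c6:1e:6c 10:00:00:05:33:c7:5e:d2\n' |
  grep -Eo '[0-9a-f]{2}(:[0-9a-f]{2}){7}'
# -> 10:00:00:05:33:c6:1e:6c
# -> 10:00:00:05:33:c7:5e:d2
```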
acde6ffe91b1952d914183af3d3953ee1ec2b69a
ZFS cheatsheet
0
29
760
647
2015-06-16T09:32:02Z
Lollypop
2
wikitext
text/x-wiki
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS comes from its very large appetite for cache. It can be limited:
First, check what the current state is:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
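The MB column in ::memstat is just the page count times the page size divided by 2^20 (assuming the 4 KiB base page size of this x86 box); e.g. the Kernel row:

```shell
# 255013 pages * 4096 bytes per page, expressed in MB
echo $(( 255013 * 4096 / 1048576 ))
# -> 996
```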
Or ZFS only:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Print all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
You can also print all parameters that are set for ZFS with:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Setting kernel parameters also works online with:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
This sets zfs_arc_max to 4 GB = 0x100000000.
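The hex value is easy to double-check in the shell:

```shell
# 0x100000000 bytes shifted right by 30 bits gives whole GiB
printf '%d GiB\n' $(( 0x100000000 >> 30 ))
# -> 4 GiB
```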
== Limiting the ARC cache ==
In /etc/system, simply add:
set zfs:zfs_arc_max = <Number of bytes>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
!!!! NEVER DO THIS !!!!
Never use ''mdb -kw'' to set these values on a production system!
But on a '''test system''' you could try to find their positions in the kernel with
<source lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</source>
Calculate, for example, 8 GB:
<source lang=bash>
# printf "0x%x\n" $[ 8 * 1024 *1024 *1024 ]
0x200000000
</source>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<source lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</source>
== Displaying ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not available as a shortcut yet, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migrating a UFS root to a ZFS root via Live Upgrade ==
First create the ZFS root pool:
# zpool create rpool /dev/dsk/<zfs-disk>
If you want to stay clear of problems, keep the name rpool.
Create the boot environment (BE) with lucreate:
# lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.
Check whether the bootfs property has been set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries that may still be left in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the ZFS boot block on the ZFS disk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Activate the new BE:
# luactivate zfsBE
==cannot destroy 'snapshot': dataset is busy==
<source lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</source>
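With several holds this gets tedious. A sketch that turns the output of zfs holds -H (the tab-separated, headerless form) into the matching release commands; drop the echo to actually run them:

```shell
# Generate one `zfs release <tag> <snap>` per hold on the snapshot.
snap=zpool1/raiddisk0@send_1
zfs holds -H "$snap" | awk '{print $2}' |
  while read -r tag; do
    echo zfs release "$tag" "$snap"
  done
```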
[[Kategorie:ZFS]]
23cc28f214d8e4a91410e96ee0b057132594e88d
SunServer
0
210
762
761
2015-06-18T12:38:30Z
Lollypop
2
/* XSCF */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
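A quick sanity check (plain shell, nothing ipmitool-specific) that the default gateway really sits in the configured subnet; this shortcut only works for a /24 mask like the 255.255.255.0 used above:

```shell
# For a 255.255.255.0 mask the network part is simply the first
# three octets; strip the host octet from both and compare.
ip=172.30.42.149
gw=172.30.42.1
[ "${ip%.*}" = "${gw%.*}" ] && echo "gateway in same /24"
# -> gateway in same /24
```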
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0):
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
1714a9ac124800677f0f3daa9b236fcca183064a
763
762
2015-06-18T12:39:12Z
Lollypop
2
/* Restore lost Serial/Product Information */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0):
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
371b18270d8855e1e85b044b5f6d176add8b5d82
764
763
2015-06-18T12:41:06Z
Lollypop
2
/* Enable sending of break signal */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systems=
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0)... note that break_signal=off turns the suppression of break signals off ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
42eb22c1f5dc4f7bb61fc3ce2bfcb6c4832459a6
796
764
2015-08-06T11:48:51Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systems=
==T4-1==
===Get disk slot===
<source lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[2]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[2]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</source>
Example:
<source lang=bash>
./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101381ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 0, PhyNum 2 => Slot 2
</source>
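The slot mapping the script prints is simply 4*controller + PhyNum; this is inferred from the script itself, so treat it as an assumption about the T4-1 backplane wiring:

```shell
# controller 0, PhyNum 2 -> slot 2 (matches the example output above)
controller=0
phy=2
echo $(( 4 * controller + phy ))
# -> 2
```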
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0)... note that break_signal=off turns the suppression of break signals off ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
a47a167cde04405023249815b183c7bd7a21ca5a
797
796
2015-08-06T11:53:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systeme=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systeme=
==T4-1==
===Get disk slot===
<source lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</source>
Example:
<source lang=bash>
./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101381ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 0, PhyNum 2 => Slot 2
</source>
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0). Note the inverted logic: break_signal=off turns suppression of break signals off, i.e. break signals will be sent ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
7757e11c2086686b272093216b17b6d9e523037d
798
797
2015-08-06T11:54:56Z
Lollypop
2
/* Get disk slot */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systeme=
==ILOM==
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
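As a side note, the `ipmitool lan print` output shown above is easy to post-process. This is my own sketch (not an ipmitool feature); the sample text stands in for the real command's output:

```shell
# Extract the IP address line from saved `ipmitool lan print` output.
# The "IP Address Source" line is skipped because it has no " +:" right
# after "IP Address".
sample='IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0'
ip=$(echo "$sample" | awk -F' : ' '/^IP Address +:/ {print $2}')
echo "$ip"   # 172.30.42.149
```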
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systeme=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<source lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</source>
Example:
<source lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</source>
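The slot mapping that get_disk_slot.sh computes can be stated on its own. A minimal sketch, assuming the T4-1 layout used above (4 PHYs per controller, controller 0 = pci@1, controller 1 = pci@2):

```shell
# Slot formula from get_disk_slot.sh: Slot = 4*controller + PhyNum.
disk_slot() {
  local controller=$1 phy_num=$2
  echo $(( 4 * controller + phy_num ))
}
disk_slot 0 2   # controller 0, PhyNum 2 => Slot 2
disk_slot 1 2   # controller 1, PhyNum 2 => Slot 6
```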
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
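setnetwork expects a dotted netmask via -m. If you only have a prefix length at hand, it can be expanded with a small helper; this is my own sketch, not an XSCF command:

```shell
# Hypothetical helper: expand a prefix length (e.g. 24) to the dotted
# netmask form that setnetwork's -m option expects.
prefix_to_netmask() {
  local p=$1 mask="" i
  for i in 1 2 3 4; do
    if [ "$p" -ge 8 ]; then
      mask+="255"; p=$((p - 8))
    else
      mask+=$(( 256 - (1 << (8 - p)) )); p=0
    fi
    [ "$i" -lt 4 ] && mask+="."
  done
  echo "$mask"
}
prefix_to_netmask 24   # 255.255.255.0
```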
===Enable sending of break signal===
Example on an M10 (Partition 0). Note the inverted logic: break_signal=off turns suppression of break signals off, i.e. break signals will be sent ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
5e2bc8918ad58794e331d119533ba1131af680ef
StorageTek SL150
0
190
765
627
2015-07-03T07:01:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Backup]]
=StorageTek SL150 Modular Tape Library=
==General Knowledge==
===Default Password===
passw0rd
===Solaris Configuration===
To use the Ultrium-6 tape drives with Solaris, add the following to your st.conf:
<source lang=bash>
"HP Ultrium 6-SCSI", "HP Ultrium 6-SCSI ", "CFGHPULTRIUM6SCSI";
CFGHPULTRIUM6SCSI = 2,0x3B,0,0x18619,4,0x58,0x58,0x5A,0x5A,3,60,1200,600,1200,600,600,18000;
</source>
The vendor string has to be exactly 8 characters:
HP<6 spaces>Product...
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
4303883353a623f2cb47b6ddd2dc2eac0f167dcd
766
765
2015-07-03T07:24:07Z
Lollypop
2
/* Solaris Configuration */
wikitext
text/x-wiki
[[Kategorie:Backup]]
=StorageTek SL150 Modular Tape Library=
==General Knowledge==
===Default Password===
passw0rd
===Solaris Configuration===
To use the Ultrium-6 tape drives with Solaris, add the following to your st.conf:
<source lang=bash>
tape-config-list =
"HP Ultrium 6-SCSI ","HP Ultrium 6-SCSI","HP Ultrium 6","HP Ultrium LTO 6","HP_LTO_GEN_6";
HP_LTO_GEN_6 = 2,0x3B,0,0x18659,4,0x00,0x46,0x58,0x5A,3,60,1200,600,1200,600,600,18000
</source>
The vendor string has to be exactly 8 characters:
HP<6 spaces>Product...
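The padding rule above can be sketched with printf (my own illustration, not part of st.conf syntax): left-justify the vendor ID, pad with spaces to width 8, then append the product ID.

```shell
# Build the 8-character vendor field: "HP" plus 6 trailing spaces.
vendor=$(printf '%-8s' "HP")
drive_id="${vendor}Ultrium 6-SCSI"
echo "${#vendor}"   # 8
```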
Unload the st driver after changing the st.conf:
<source lang=bash>
# modunload -i $(modinfo | nawk '$6=="st"{print $1}')
</source>
Check if the new config settings matched the drive:
<source lang=bash>
# mt -f /dev/rmt/0cn config
"HP Ultrium 6-SCSI", "HP Ultrium 6-SCSI ", "CFGHPULTRIUM6SCSI";
CFGHPULTRIUM6SCSI = 2,0x3B,0,0x18619,4,0x58,0x58,0x5A,0x5A,3,60,1200,600,1200,600,600,18000;
</source>
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
a01d79ba9b6bb6b10b0e462f0c0f2f8c0deeafbf
Solaris 11 Networking
0
96
767
708
2015-07-08T07:26:53Z
Lollypop
2
/* Change adress */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To keep the automatic network configuration from reverting your changes, switch to the fixed (manual) configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
== Set tcp/udp parameter (formerly ndd) ==
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
94d2ac67500909f2b9413e8869ca021a6d09ba66
768
767
2015-07-08T07:29:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To keep the automatic network configuration from reverting your changes, switch to the fixed (manual) configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
dd91c979d4871c651765353caa04af0b7f452e7e
Solaris LiveUpgrade
0
218
769
2015-07-08T12:10:12Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris|LiveUpgrade]]“
wikitext
text/x-wiki
[[Kategorie:Solaris|LiveUpgrade]]
2ed35fb1bd98e3565b2d2ea52003e703d4252a40
770
769
2015-07-08T12:22:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<source lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</source>
Higher patch revisions may be available...
==Mount the Solaris 10 DVD ISO-image==
<source lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10u11
</source>
==Upgrade the new BootEnvironment==
<source lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</source>
==Activate the new BootEnvironment==
<source lang=bash>
# luactivate Solaris10u11
</source>
07ab7440a1f63140097d6e08a6ccb0d52f5f5a3f
807
770
2015-08-11T09:12:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<source lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</source>
Higher patch revisions may be available...
==Mount the Solaris 10 DVD ISO-image==
<source lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10u11
</source>
==Upgrade the new BootEnvironment==
<source lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</source>
==Activate the new BootEnvironment==
<source lang=bash>
# luactivate Solaris10u11
</source>
=Install EIS patches=
==Mount the new EIS-ISO==
<source lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</source>
==Update LU patches==
<source lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</source>
==Mount the new BootEnvironment==
<source lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</source>
==Install EIS-Patches==
<source lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
# luumount Solaris10-EIS-15JUL15
</source>
==Activate BE & Reboot==
<source lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</source>
ceba2fc820c3b511a11597ac336d9cf2c3e7b71e
808
807
2015-08-11T09:59:08Z
Lollypop
2
/* Install EIS-Patches */
wikitext
text/x-wiki
[[Kategorie:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<source lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</source>
Higher patch revisions may be available...
==Mount the Solaris 10 DVD ISO-image==
<source lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10u11
</source>
==Upgrade the new BootEnvironment==
<source lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</source>
==Activate the new BootEnvironment==
<source lang=bash>
# luactivate Solaris10u11
</source>
=Install EIS patches=
==Mount the new EIS-ISO==
<source lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</source>
==Update LU patches==
<source lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</source>
==Mount the new BootEnvironment==
<source lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</source>
==Install EIS-Patches==
<source lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
Now the Solaris 10_x86 Recommended Patches...
...
</source>
==Problems: Installing this patch set to an alternate boot environment first requires the live boot environment to have patch utilities and other prerequisite patches==
<source lang=bash>
Installing this patch set to an alternate boot environment first requires the
live boot environment to have patch utilities and other prerequisite patches
at the same (or higher) patch revisions as those delivered by this patch set.
The required prerequisite patches can be applied to the live boot environment
by invoking this script with the '--apply-prereq' option, ie.
./installpatchset --apply-prereq --s10patchset
</source>
==Solution==
<source lang=bash>
root@solaris10 # cd /mnt/var/tmp/10/10_x86_Recommended
root@solaris10 # ./installpatchset --apply-prereq --s10patchset
</source>
<source lang=bash>
# luumount Solaris10-EIS-15JUL15
</source>
==Activate BE & Reboot==
<source lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</source>
effa3c506482d89d9bca7a56929b339f17649998
809
808
2015-08-11T10:00:24Z
Lollypop
2
/* Solution */
wikitext
text/x-wiki
[[Kategorie:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<source lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</source>
Higher patch revisions may be available...
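To see which revision of a patch is already installed, the <code>showrev -p</code> output can be filtered. A minimal sketch; the sample lines below are illustrative, on a live Solaris 10 host pipe <code>showrev -p</code> itself into the awk filter:
<source lang=bash>
# Illustrative sample of `showrev -p` output; on a real host use:
#   showrev_out=$(showrev -p)
showrev_out='Patch: 119255-94 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWinst
Patch: 121431-97 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWluu'

# Print the installed revision of the Live Upgrade patch 121431
echo "$showrev_out" | awk -F'[ -]+' '$2 == "121431" { print $3 }'
</source>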
==Mount the Solaris 10 DVD ISO-image==
<source lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10u11
</source>
==Upgrade the new BootEnvironment==
<source lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</source>
==Activate the new BootEnvironment==
<source lang=bash>
# luactivate Solaris10u11
</source>
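Before rebooting it is worth confirming the new BE with <code>lustatus</code>. The output below is an assumed sample; the exact column layout varies by release:
<source lang=bash>
# Assumed sample of `lustatus` output; on a real host use: lustatus_out=$(lustatus)
lustatus_out='Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u10                     yes      yes    no        no     -
Solaris10u11               yes      no     yes       no     -'

# Check that the new BE is complete (column 2) and active on reboot (column 4)
echo "$lustatus_out" | awk '$1 == "Solaris10u11" { print $2, $4 }'
</source>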
=Install EIS patches=
==Mount the new EIS-ISO==
<source lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</source>
==Update LU patches==
<source lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</source>
==Mount the new BootEnvironment==
<source lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</source>
==Install EIS-Patches==
<source lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
Now the Solaris 10_x86 Recommended Patches...
...
</source>
==Problem: the patch set requires prerequisite patches on the live boot environment==
<source lang=bash>
Installing this patch set to an alternate boot environment first requires the
live boot environment to have patch utilities and other prerequisite patches
at the same (or higher) patch revisions as those delivered by this patch set.
The required prerequisite patches can be applied to the live boot environment
by invoking this script with the '--apply-prereq' option, ie.
./installpatchset --apply-prereq --s10patchset
</source>
===Solution===
<source lang=bash>
root@solaris10 # cd /mnt/var/tmp/10/10_x86_Recommended
root@solaris10 # ./installpatchset --apply-prereq --s10patchset
...
Installation of prerequisite patches complete.
...
</source>
==Unmount the BE==
<source lang=bash>
# luumount Solaris10-EIS-15JUL15
</source>
==Activate BE & Reboot==
<source lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</source>
28ca3a79f2ee8eceed6b953c4bcf7a748222d87e
NetApp and Solaris
0
219
771
2015-07-14T12:24:35Z
Lollypop
2
Page created: „[[Kategorie:NetApp|Solaris]] [[Kategorie:Solaris|NetApp]] Just some unsorted lines... ==Timeout settings in Solaris== Settings for MPxIO over FC: ===/kernel/d…“
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Timeout settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN", "netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
c1ce6c6f3b03d0cf3b9f4f937f2282ff3675f819
772
771
2015-07-14T13:11:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Timeout settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN", "netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
==LUN alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
So... what is it?
409bf11e82c395fe4d970ae3d6225b67172339f7
776
772
2015-07-14T13:52:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Timeout settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN", "netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN", "physical-block-size:4096";
</source>
==LUN alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
So... what is it?
0adb560ea0b24168f3c5ddc8a5b5da36f39df253
777
776
2015-07-14T14:02:31Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Timeout settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
==LUN alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
So... what is it?
19d93fa0261b9d527fdef67bdb3007641303c60d
778
777
2015-07-14T14:15:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Timeout settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12.
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
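Going the other way, from a block size to the exponent: ashift is simply the power-of-two exponent of the physical block size. A small POSIX shell sketch:
<source lang=bash>
# Compute the ashift for a given physical block size:
# the exponent n such that 2^n equals the block size.
blocksize=4096
ashift=0
while [ "$blocksize" -gt 1 ]; do
  blocksize=$((blocksize / 2))
  ashift=$((ashift + 1))
done
echo "$ashift"   # 12 for 4096-byte blocks, 9 for 512-byte blocks
</source>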
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives List of sd-config-list entries for Advanced-Format drives]
1c6ffbec62f4d99343b8a3926c21c008334e95f0
779
778
2015-07-14T14:19:32Z
Lollypop
2
/* Alignment and ZFS */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Timeout settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12.
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep 'ashift| name'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
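To pair each pool name with its ashift on one line, the same zdb output can be pushed through awk. A sketch against sample output like the above; the pool names are illustrative, on a live system pipe <code>zdb</code> itself into the awk script:
<source lang=bash>
# Illustrative sample of `zdb | egrep 'ashift| name'` output
zdb_out="name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9"

# Remember the last pool name seen and print it with the following ashift line
echo "$zdb_out" | awk '/name:/ { pool=$2 } /ashift:/ { print pool, "ashift =", $2 }'
</source>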
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives List of sd-config-list entries for Advanced-Format drives]
87212cd054987375f65078527d10f8063ba684a2
780
779
2015-07-14T15:30:47Z
Lollypop
2
/* Timeout settings in Solaris */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12.
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep 'ashift| name'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives List of sd-config-list entries for Advanced-Format drives]
b16be29d7d631f83deca61e5c4a44e1370eef624
781
780
2015-07-14T15:31:40Z
Lollypop
2
/* What ashift do I have? */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12.
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives List of sd-config-list entries for Advanced-Format drives]
81564065c7cffa41a6fd372196b64b02ac2942c3
782
781
2015-07-14T15:35:06Z
Lollypop
2
/* Alignment and ZFS */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives List of sd-config-list entries for Advanced-Format drives]
c23e97ded2bae7bc0fa074e8afd298ff5de92976
783
782
2015-07-14T16:03:39Z
Lollypop
2
/* Links */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
a12957d8097ecb1df8dc33bf25d60b454466c959
784
783
2015-07-15T07:22:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
ssd-config-list="NETAPP LUN","netapp-ssd-config";
netapp-ssd-config=1,0x9007,64,300,30,0,0,0,0,0,0,0,0,0,30,0,0,8,0,0;
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: KSP_SUN_SRV06_SRV07=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
<source lang=bash>
stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
7169708a2df25bce8072e0aca8826f94ddea6281
785
784
2015-07-15T12:31:29Z
Lollypop
2
/* /kernel/drv/ssd.conf */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===/kernel/drv/sd.conf===
<source lang=bash>
sd-config-list=
"NETAPP LUN","physical-block-size:4096";
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: KSP_SUN_SRV06_SRV07=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
<source lang=bash>
stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
48941def342b5c017ec223c96468a77d48f4de11
786
785
2015-07-15T12:32:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: KSP_SUN_SRV06_SRV07=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
<source lang=bash>
stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
4232a952feb207ed5dc2d37604a487c863e55e6b
787
786
2015-07-15T12:32:31Z
Lollypop
2
/* /kernel/drv/ssd.conf */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: KSP_SUN_SRV06_SRV07=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
<source lang=bash>
stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
22a65d3df99cdc178dcb67f39e4c7564c0ec5fc5
788
787
2015-07-15T12:44:07Z
Lollypop
2
/* Status of alignment */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
Just some unsorted lines...
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: KSP_SUN_SRV06_SRV07=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
Or use "lun alignment show":
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Widely spread reads. I think the ashift is not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</source>
Or "stats show lun":
<source lang=bash>
stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
adc6f0eead2970515c4f31ae4662b67e42f18548
792
791
2015-07-15T15:23:27Z
Lollypop
2
/* Status of alignment */
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
'''Just some unsorted lines...'''
'''Working on it... don't believe what you read here! It is not verified yet.'''
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
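To spot devices that still report a 512-byte physical block size, the one-liner's output can be filtered further. A minimal sketch, assuming the tab-separated `ssd: ...`/`un_phy_blocksize: ...` format produced by the printf above (the sample line is made up):

```shell
# Hypothetical filter over the one-liner above: warn about any ssd
# instance whose reported un_phy_blocksize is not 4096.
# The sample line mimics the printf format of the gawk script.
sample=$(printf 'ssd: 3\tun_phy_blocksize: 512\tNETAPP\t50.00GB')
printf '%s\n' "$sample" | awk -F'\t' '
  { split($2, a, ": ") }             # a[2] holds the block size
  a[2] != 4096 { print "WARN " $1 }  # flag anything that is not 4k
'
```

For the sample line this prints `WARN ssd: 3`; devices already at 4096 produce no output.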
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, create your zpools with ashift=12 (the alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
Or use "lun alignment show":
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Widely spread reads. I think the ashift is not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</source>
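The eight histogram buckets correspond to 512-byte offsets within a 4k block; bucket 0 counts I/O that starts on a 4k boundary, so a healthy LUN has nearly everything in the first bucket. A sketch of classifying a LUN from such a line (the 90% threshold is an arbitrary assumption):

```shell
# Sketch: classify a LUN from its write alignment histogram line.
# Bucket 0 counts writes starting on a 4k boundary; on a well-aligned
# LUN nearly all writes land there. 90 is an arbitrary threshold.
line='Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0'
echo "$line" | awk -F': ' '{
  split($2, b, ", ")
  if (b[1] >= 90) print "aligned"; else print "misaligned"
}'
```

The sample line above (99% of writes in a non-zero bucket) classifies as `misaligned`, matching what the filer reports for /vol/TEMP201/TEMP201.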
Or "stats show lun":
<source lang=bash>
filer01*> stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
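The same relation the other way round: ashift is log2 of the physical block size. A tiny POSIX sh helper can compute it without bc (a sketch; `blocksize_to_ashift` is a made-up name, not a system command):

```shell
# ashift = log2(physical block size): 512 -> 9, 4096 -> 12.
# blocksize_to_ashift is a hypothetical helper, not a system command.
blocksize_to_ashift() {
  bs=$1 n=0
  while [ "$bs" -gt 1 ]; do
    bs=$((bs / 2))
    n=$((n + 1))
  done
  echo "$n"
}
blocksize_to_ashift 4096   # 12
blocksize_to_ashift 512    # 9
```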
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
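An awk pass over that zdb output pairs each pool name with its ashift and flags pools still at 9. A sketch, assuming zdb's `name:`/`ashift:` line format; the printf just replays the listing above:

```shell
# Sketch: flag pools whose ashift is not 12, from zdb output.
printf "name: 'apache_pool'\nashift: 9\nname: 'mysql_pool'\nashift: 12\n" | awk '
  /name:/   { pool = $2 }                                 # remember pool name
  /ashift:/ { if ($2 != 12) print pool " has ashift " $2 }
'
```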
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
5b9ae0aa5ceff4dd3c7a78f32ed788add0c8a47a
NetApp SMO
0
77
773
176
2015-07-14T13:13:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|SMO]]
=Installing SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# cd /tmp
# gtar xzf ~/smo/netapp_solaris_host_utilities_5_1_sparc.tar.gz
# pkgadd -d NTAPSANTool.pkg
</pre>
And for heaven's sake, do NOT run:
# /opt/NTAP/SANToolkit/bin/mpxio_set -e --no-never-do-this
Otherwise ALUA will not work!
<pre>
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# cd /tmp
# gtar xzf ~/smo/NTAPsnapdrive_sun_sparc_5.0P1.tar.Z
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
Only now start the snapdrived daemon:
<pre>
# /usr/sbin/snapdrived start
</pre>
Establish the connection from SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Then just click through the wizard...
6c9b3b59f71911e9b16b7aa9dcfe95c57233c909
NetApp SSH
0
110
774
309
2015-07-14T13:13:42Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|SSH]]
== Check whether the SSH home directory /etc/sshd/<user>/.ssh exists ==
<source lang=bash>
nac*> priv set -q diag
nac*> ls /etc/sshd/
.
..
ssh_host_key
ssh_host_key.pub
ssh_host_rsa_key
ssh_host_rsa_key.pub
ssh_host_dsa_key
ssh_host_dsa_key.pub
</source>
== Create a directory with mode 0700 ==
<source lang=bash>
nac*> options wafl.default_qtree_mode
wafl.default_qtree_mode 0777
nac*> options wafl.default_qtree_mode 0700
nac*> qtree create /vol/vol0/__
nac*> options wafl.default_qtree_mode 0777
</source>
== Check / enable the NDMP daemon ==
<source lang=bash>
nac*> ndmpd status
ndmpd OFF.
No ndmpd sessions active.
nac*> ndmpd on
nac*> ndmpd status
ndmpd ON.
No ndmpd sessions active.
</source>
== Create the directory by copying the qtree ==
<source lang=bash>
nac*> ndmpcopy /vol/vol0/__ /vol/vol0/etc/sshd/root/.ssh
...
Ndmpcopy: Transfer successful [ 0 hours, 0 minutes, 20 seconds ]
Ndmpcopy: Done
nac*> qtree delete /vol/vol0/__
</source>
== Write the SSH key to /etc/sshd/<user>/.ssh/authorized_keys ==
<source lang=bash>
nac*> wrfile /etc/sshd/root/.ssh/authorized_keys
ssh-dss AAA...== user@clienthost
^C
</source>
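Since wrfile takes the pasted line verbatim, it is worth checking the key line's shape before pasting. A minimal sketch (the key string is a dummy):

```shell
# Sketch: sanity-check an authorized_keys line before pasting it into
# wrfile: field 1 must be a key type, field 2 the base64 blob.
key='ssh-dss AAAAB3NzaC1kc3MAAACBAP...== user@clienthost'
echo "$key" | awk '$1 ~ /^ssh-(dss|rsa)$/ && NF >= 2 { print "ok"; ok=1 }
                   END { if (!ok) print "bad" }'
```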
0c7b52088c7da5fc6f50e3f70fddf61651f95b07
NetApp SP
0
211
775
727
2015-07-14T13:14:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware|NetApp]]
[[Kategorie:NetApp|SP]]
== Set up the SP IP address ==
<source lang=bash>
filer01> system node service-processor network modify -address-type IPv4 -ip-address 172.32.40.54 -netmask 255.255.255.0 -gateway 172.32.40.1 -enable true
filer01> system node service-processor reboot-sp
Note: If your console connection is through the SP, it will be disconnected.
Do you want to reboot the SP ? {y|n}: y
</source>
51049e9896ca2715ce810ca5a6328a9bf46183cc
Oracle Tips and Tricks
0
220
793
2015-07-25T19:08:43Z
Lollypop
2
The page was newly created: „[[Kategorie:Oracle|Tipps]] ==Show non default settings== <source lang=sql> SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE'; </s…“
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Show non-default settings==
<source lang=sql>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
3df91662e6124119063234dbfe0bf7ce2583ca5b
795
793
2015-07-25T19:22:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non-default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
497fe288e507c103c45f94b4b20bb94f351e391c
799
795
2015-08-07T10:41:04Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non-default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%';
</source>
feabbf6717ac166047354b1f3214407f366bad14
Category:Oracle
14
221
794
2015-07-25T19:09:09Z
Lollypop
2
The page was newly created: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
MySQL Tipps und Tricks
0
197
800
646
2015-08-11T08:28:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema,table_name,concat_ws(" ",round(data_length/1024/1024,2),"MB") as size_mb from information_schema.tables order by size_mb;
</source>
92839879a17a8606c30eefa199297073bc0c64b2
801
800
2015-08-11T08:28:56Z
Lollypop
2
/* Per table */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
479658df9e9d00a76e63afa5a34ab707d2b891ab
802
801
2015-08-11T08:29:20Z
Lollypop
2
/* Per table */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
78ac100440ea5390ea805c2769889b4b74814f65
803
802
2015-08-11T08:29:42Z
Lollypop
2
/* Per database */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
b9c2187e5a2603cf8db88f7a779206c409f5f250
804
803
2015-08-11T08:31:21Z
Lollypop
2
/* Per database */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
a5605c752f65570f19962a7745da270494027f24
805
804
2015-08-11T08:31:31Z
Lollypop
2
/* Per table */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
2809a94e5638a8ceefafe3ec3d62ea811ae13a80
806
805
2015-08-11T08:33:43Z
Lollypop
2
/* Per database */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
9af05f9e144e167242f571023080affb44011a11
810
806
2015-08-12T09:31:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
===Enable/disable logging===
<source lang=mysql>
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
38e87215267f7ff56e38dc5fe6442af329432530
811
810
2015-08-12T09:32:46Z
Lollypop
2
/* Enable/disable logging */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
2a255ca2bbb5b23042313dc76c30879353d4a966
MySQL Tipps und Tricks
0
197
812
811
2015-08-12T09:33:58Z
Lollypop
2
/* Logging */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
7799db15a194b6361a49e5adb7709e845ee8eba6
813
812
2015-08-12T09:34:38Z
Lollypop
2
/* Enable/disable general logging */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
fec4c82518784a99471d6a9fe327f5d70a3ad30c
814
813
2015-08-12T09:35:54Z
Lollypop
2
/* Logging */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
51e423c4b173acfe8020181161dd4f58527d5865
815
814
2015-08-12T09:38:27Z
Lollypop
2
/* Logging */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
f7a96b44a00805561b2e184cfa2275544904e3b0
816
815
2015-08-12T09:42:34Z
Lollypop
2
/* What can I log? */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
cb5df3619e746b321cb3d6ccaab2a42e36b1ceee
817
816
2015-08-12T10:50:57Z
Lollypop
2
/* Choose logging destination FILE/TABLE/NONE */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
44cf1939dffa656000af8d886742ad40460beac5
818
817
2015-08-12T10:58:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data changes; ext3 default mode)
* data=journal (worst performance, but best data protection; journals metadata and all data)
1e2362f61932a96358b343fce277181d9a76cbfa
819
818
2015-08-12T11:54:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data changes; ext3 default mode)
* data=journal (worst performance, but best data protection; journals metadata and all data)
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
95531595adb92d1647fc825837cd71ba13bfe91b
823
819
2015-08-12T13:48:02Z
Lollypop
2
/* Filesystems for MySQL */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
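The data_length column is in bytes; the queries above convert it to MB by dividing by 1024·1024 and rounding to two places. The same arithmetic as a quick sanity check (the helper name is my own, not part of MySQL):

```python
def bytes_to_mb(data_length):
    # Mirrors ROUND(data_length/1024/1024, 2) from the queries above
    return round(data_length / (1024 * 1024), 2)

# 25 GiB expressed in bytes, as in the raw-device example further down
print(bytes_to_mb(26843545600))  # 25600.0
```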
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
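As a sketch of what "permanent" looks like, a hypothetical my.cnf fragment (file location and paths are assumptions, mirroring the SET GLOBAL examples in this section):

```ini
# /etc/mysql/my.cnf (or a conf.d snippet) -- survives server restarts
[mysqld]
general_log         = ON
general_log_file    = /var/lib/mysql/general.log
slow_query_log      = ON
slow_query_log_file = /var/lib/mysql/slow-query.log
log_output          = FILE
```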
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data changes; ext3 default mode)
* data=journal (worst performance, but best data protection; journals metadata and all data)
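For illustration, such options would appear in /etc/fstab like this (device path and mount point are placeholders, not taken from this setup):

```text
/dev/vg-data/lv-mysql  /var/lib/mysql  ext4  noatime,data=writeback  0  2
```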
===Raw devices with InnoDB===
Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25 GiB (fdisk reports decimal GB)!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
a62c132386d8d3da4a39f013f784f8b6934f4cea
824
823
2015-08-12T13:50:17Z
Lollypop
2
/* Raw devices with InnoDB */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to files (FILE is the default)
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25 GiB (fdisk reports decimal GB)!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
ca0aac2331c0f715e861daac6a308b9da92c63d7
825
824
2015-08-12T13:54:01Z
Lollypop
2
/* Raw devices with InnoDB */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to files (FILE is the default)
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.
After that, the device is owned by the mysql user:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25 GiB (fdisk reports decimal GB)!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
d563f4d4afdd3b323c8be8430625e92743b21503
826
825
2015-08-12T13:59:08Z
Lollypop
2
/* Raw devices with InnoDB */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to files (FILE is the default)
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by the mysql user:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25 GiB (fdisk reports decimal GB)!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
a5a1133d3e7cfe7f3caae37070b68611f8d59ddc
827
826
2015-08-12T14:17:51Z
Lollypop
2
/* Raw devices with InnoDB */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to files (FILE is the default)
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by the mysql user:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25 GiB (fdisk reports decimal GB)!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
4e8b085648b7858fc7c5e27e5036654da3b29f82
828
827
2015-08-12T14:21:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to files (FILE is the default)
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by the mysql user:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25 GiB (fdisk reports decimal GB)!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
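After a restart the effective values can be verified from the client:
<source lang=mysql>
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool%';
</source>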
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
08e4461ce748f1844758993fe446f365cff8629e
829
828
2015-08-12T14:22:28Z
Lollypop
2
/* Sample InnoDB configuration */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
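A variant that quotes user and host (useful when host patterns contain '%' or other special characters): it builds the SHOW GRANTS statements in SQL and pipes them back into mysql. A sketch under the same connection assumptions as above:
<source lang=bash>
# mysql --skip-column-names --batch --execute "select concat('show grants for ''',user,'''@''',host,''';') from mysql.user" | mysql --skip-column-names --batch
</source>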
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
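Note that data_length covers only the row data. To include index size as well, a variant summing index_length (both columns exist in information_schema.tables):
<source lang=mysql>
mysql> select table_schema as database_name, table_name, round((data_length+index_length)/1024/1024,2) as total_size_mb from information_schema.tables order by total_size_mb;
</source>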
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to files (FILE is the default)
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
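To check which destinations are currently active:
<source lang=mysql>
mysql> SHOW GLOBAL VARIABLES LIKE 'log_output';
</source>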
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
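What counts as "slow" is controlled by long_query_time (in seconds, default 10). For example, to log every statement that takes longer than two seconds:
<source lang=mysql>
mysql> SET GLOBAL long_query_time = 2;
</source>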
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, ext3 default mode: journals metadata and groups metadata writes with the related data changes)
* data=journal (worst performance but best data protection: journals metadata and all data)
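A matching /etc/fstab entry could look like this (device and mount point are placeholders, not taken from the setup below):
<source lang=bash>
# MySQL data volume: no atime updates, metadata-only journaling
/dev/vg-data/lv-mysql  /var/lib/mysql  ext4  noatime,data=writeback  0  2
</source>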
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by the mysql user:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
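The same sanity check works with plain shell arithmetic instead of bc:
<source lang=bash>
# Byte count as reported by fdisk above
bytes=26843545600
# Integer GiB: divide by 1024^3
echo "$((bytes / 1024 / 1024 / 1024)) GiB"
</source>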
Yes... really 25 GB! (fdisk reports decimal gigabytes, hence the 26.8 GB above.)
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget AppArmor! Like I did. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
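The 1433M above is the 70% rule of thumb from the comment applied to a 2 GiB host; a sketch of the arithmetic (the MemTotal value is an assumed example — on a real host you would take it from /proc/meminfo):
<source lang=bash>
# MemTotal in kB as /proc/meminfo would report it (example: 2 GiB host)
mem_kb=2097152
# 70% of RAM in MB, the suggested innodb_buffer_pool_size
awk -v kb="$mem_kb" 'BEGIN { printf "%dM\n", kb * 0.7 / 1024 }'
</source>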
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
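PROCEDURE ANALYSE() also accepts two optional arguments, max_elements and max_memory, which limit how many distinct values it considers per column. A sketch (tbl is a placeholder table name):
<source lang=mysql>
mysql> select * from tbl PROCEDURE ANALYSE(10, 256);
</source>
Note that PROCEDURE ANALYSE was deprecated in MySQL 5.7 and removed in 8.0.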
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
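Once MySQL is running normally again, verify that the account is usable:
<source lang=mysql>
mysql> SHOW GRANTS FOR 'root'@'localhost';
</source>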
==Last update time==
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
Note that for InnoDB tables UPDATE_TIME is NULL before MySQL 5.7.
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mountoptions are:
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size: roughly 0.7 * total memory on a dedicated server
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size: per-session cache for bulk inserts
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances: more instances reduce buffer pool contention
innodb_buffer_pool_instances=2
# innodb_thread_concurrency: rule of thumb is 2 * number of CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT avoids double buffering in the OS page cache
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
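The 0.7 * total_mem_size rule of thumb from the comment above can be computed directly. A sketch that reads MemTotal from /proc/meminfo (Linux only; the 70% figure is the guideline used here, not a hard rule):
<source lang=bash>
# Suggest innodb_buffer_pool_size as roughly 70% of physical RAM.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb * 7 / 10 / 1024 ))
echo "innodb_buffer_pool_size=${pool_mb}M"
</source>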
==Analyse==
PROCEDURE ANALYSE() examines the data in a table and suggests optimal column types:
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Sysbench==
Benchmark the server with sysbench: create the test schema, prepare a test table, run an OLTP workload, and drop the table afterwards:
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
02d92250d3b6550b07938ccf98f69dfa04e572f2
850
849
2015-08-13T15:10:56Z
Lollypop
2
/* Analyse */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mountoptions are:
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyse==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
2cad429871c97f5ed2fdaa67440e1055d947daa1
851
850
2015-08-13T15:11:35Z
Lollypop
2
/* Analyse */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mountoptions are:
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
99d93a42afe1252c211b192e4d8af528fe2d2d14
852
851
2015-08-13T15:16:54Z
Lollypop
2
/* Analyze */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mountoptions are:
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700|MySQL Performance Tuning]]
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
27fc6c1a66bbd75427961e6f0b4d42b3113ed156
853
852
2015-08-13T15:17:14Z
Lollypop
2
/* Analyze */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -I{} mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
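Note that UPDATE_TIME can be NULL (InnoDB did not maintain it before MySQL 5.7, and since 5.7 the value is held in memory and lost on restart), so a variant that filters those rows out can help:
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES WHERE UPDATE_TIME IS NOT NULL ORDER BY DB,UPDATE_TIME;
</source>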
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
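The queries above count only the data pages; to include secondary indexes as well, add index_length, for example:
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round((data_length+index_length)/1024/1024,2) as total_mb from information_schema.tables where engine='InnoDB' order by total_mb;
</source>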
==Logging==
Settings changed with SET GLOBAL only last until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries (deprecated; superseded by slow_query_log)
* general_log
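To make any of these permanent, set them in my.cnf as well; a minimal sketch for the slow query log (the file path is an example):
<source lang=mysql>
[mysqld]
# Keep slow-query logging enabled across restarts
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slow-query.log
log_queries_not_using_indexes=1
</source>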
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled regardless of the other destinations
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, ext3 default mode; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance but best data protection; journals both metadata and all data)
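A matching /etc/fstab entry for a dedicated MySQL data volume could look like this (device path and mount point are examples):
<source lang=bash>
# MySQL data on its own ext4 volume: no atime updates, writeback journaling
/dev/vg-data/lv-mysql  /var/lib/mysql  ext4  noatime,data=writeback  0  2
</source>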
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget apparmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
e6b6f23f963b67a8e4347656220a6028c9a35bf8
854
853
2015-08-13T15:43:55Z
Lollypop
2
/* Analyze */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mountoptions are:
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
61d96e862a61b994fe95dae826a3c2f586ef8d0d
855
854
2015-08-17T07:11:15Z
Lollypop
2
/* Lost grants */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
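The xargs substitution in the one-liner runs one <code>show grants</code> per user@host line; the mechanics can be seen with echo as a harmless stand-in (using <code>-I {}</code>, the non-deprecated spelling of <code>-i</code>):
<source lang=bash>
# Stand-in for the grants one-liner: each input line becomes one command.
printf 'root@localhost\nlollypop@%%\n' \
  | xargs -I {} echo "show grants for {}"
# → show grants for root@localhost
# → show grants for lollypop@%
</source>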
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Anything you change with SET GLOBAL lasts only until the server restarts.
'''Don't forget to add it to your my.cnf to make it permanent!'''
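For example, to make file-based slow-query logging survive a restart, the my.cnf fragment could look like this (the file path is just the one used further down):
<source lang=ini>
[mysqld]
log_output          = FILE
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
</source>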
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries (replaced by slow_query_log as of MySQL 5.6)
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set in general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, there is no logging at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Useful mount options are:
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; journals metadata and groups it with the related data writes)
* data=journal (worst performance but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
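The bc conversion above can also be checked with plain shell arithmetic:
<source lang=bash>
# Cross-check: fdisk reported 26843545600 bytes; integer shell
# arithmetic yields the size in GiB that the InnoDB config expects.
bytes=26843545600
echo "$((bytes / 1024 / 1024 / 1024))G"   # → 25G
</source>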
Yes, really 25 GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=ini>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaand... do not forget AppArmor! Like I did... :-D
<source lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=text>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
Now shut down MySQL again and edit /etc/mysql/conf.d/innodb.cnf to '''change newraw to raw''':
<source lang=ini>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=ini>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
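The 0.7*total_mem_size rule of thumb from the comment above can be sketched in shell. On a live host mem_kb would come from /proc/meminfo (e.g. <code>mem_kb=$(awk '/MemTotal/{print $2}' /proc/meminfo)</code>); here a 2 GiB host is assumed, which reproduces the 1433M used above:
<source lang=bash>
# Sketch: size the buffer pool at ~70% of RAM, in whole MB.
mem_kb=2097152                          # assumed: a 2 GiB host
pool_mb=$((mem_kb * 7 / 10 / 1024))
echo "innodb_buffer_pool_size=${pool_mb}M"   # → innodb_buffer_pool_size=1433M
</source>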
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Either use an init file:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or start the server with --skip-grant-tables:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
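Note: on MySQL 5.7.6 and later, SET PASSWORD with the PASSWORD() function is no longer the documented way; the init file would instead contain an ALTER USER statement:
<source lang=mysql>
ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';
</source>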
fd2a27c1219966c2f68e7db3241fb2cc46fa5d84
Linux udev
0
88
822
821
2015-08-12T13:12:58Z
Lollypop
2
/* Make your rule */
wikitext
text/x-wiki
[[Kategorie:Linux|LVM]]
An example rule in /etc/udev/rules.d/99-custom.rules:
<source lang=bash>
ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="VirtualBox-$env{DM_NAME}"
</source>
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data", ENV{DM_LV_NAME}=="lv-rawdisk-innodb*", OWNER="mysql", GROUP="mysql"
</source>
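udev match values like lv-rawdisk-innodb* are shell-style glob patterns, not regular expressions; the matching behaviour can be mimicked with a case statement:
<source lang=bash>
# Mimic the DM_LV_NAME=="lv-rawdisk-innodb*" match with shell globbing.
matches() {
  case "$1" in
    lv-rawdisk-innodb*) echo match ;;
    *)                  echo no-match ;;
  esac
}
matches lv-rawdisk-innodb01   # → match
matches lv-data01             # → no-match
</source>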
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql unknown... maybe I should install MySQL first ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
247791052a03d454ead361f9659dc860c713c8b1
ZFS on Linux
0
222
832
831
2015-08-12T15:45:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as raw-vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
a29ec0ae6061a16469e0ac0d8b1b190d4d143fb8
Category:VirtualBox
14
223
833
2015-08-12T15:46:04Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Category:ZFS
14
31
834
54
2015-08-12T15:46:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
ZFS fast scrub
0
141
835
399
2015-08-12T15:47:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|fast scrub]]
[[Kategorie:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get back to a production state after a bloody hard unplanned downtime... and so on...
I would expect you not to do this.
But it worked for me:
<source lang=bash>
# echo "zfs_scrub_delay/D" | mdb -k
zfs_scrub_delay:
zfs_scrub_delay:4
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</source>
This sets the scrub delay to zero: your system will do a lot of scrubbing and not much of anything else.
Remember to set it back to the old value afterwards (4 in this example)!
<source lang=bash>
# echo "zfs_scrub_delay/W4" | mdb -kw
zfs_scrub_delay:0x0 = 0x4
</source>
But remember I told you: NEVER DO THIS!!!
00efd2b0755de8ffef4137bd73388b6d64006642
ZFS RaidController
0
186
836
616
2015-08-12T15:48:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:ZFS|RaidController]]
=ZFS is better directly on the disks=
Because of the better checksums in ZFS, you want to switch off the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuration of all disks as individual logical drives:
<source lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpSetProp MaintainPdFailHistoryEnbl 0 -a0
q for quit
</source>
e0abeb8e0db26a1c5b4017a0c3c85351c28c947f
ZFS cheatsheet
0
29
837
760
2015-08-12T15:49:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
== ZFS Tuning ==
The perceived sluggishness of systems running ZFS comes from its huge appetite for cache, which can be limited.
First, take a look at the current state:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     255013               996   24%
ZFS File Data              359196              1403   34%
Anon                       346538              1353   33%
Exec and libs               33948               132    3%
Page cache                   4836                18    0%
Free (cachelist)            22086                86    2%
Free (freelist)             23420                91    2%
Total                     1045037              4082
Physical                  1045036              4082
</source>
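For reference, ::memstat counts pages of 4 KB on x86, so the MB column is simply pages * 4 / 1024; checking the ZFS File Data row above:
<source lang=bash>
# 359196 pages of 4 KB each, expressed in MB (integer arithmetic)
echo $(( 359196 * 4 / 1024 ))
</source>
which matches the 1403 MB reported.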
Or ZFS only:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Print all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
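Note that in this output arc_meta_used (150 MB) is already above arc_meta_limit (128 MB). The reported ''size'' is the sum of the component byte counters; checking against the values above:
<source lang=bash>
# hdr_size + data_size + other_size in MB (1 MB = 1048576 bytes)
echo $(( (14825736 + 468982784 + 53480992) / 1048576 ))
</source>
which matches size = 512 MB.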
You can also print all parameters that are set for ZFS with:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Kernel parameters can also be set online with:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
This sets zfs_arc_max to 4 GB = 0x100000000.
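The value for mdb has to be given as hex bytes; the conversion can be done in the shell:
<source lang=bash>
# 4 GB in bytes, printed as the hex value expected by mdb
printf '0x%x\n' $(( 4 * 1024 * 1024 * 1024 ))
</source>
which prints 0x100000000.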
== Limiting the ARC cache ==
Simply set in /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
!!!! NEVER DO THIS !!!!
Never use ''mdb -kw'' to set the values!!!
But on a '''test system''' you could try to find the position in the kernel with
<source lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</source>
Calculate for example 8GB:
<source lang=bash>
# printf "0x%x\n" $[ 8 * 1024 *1024 *1024 ]
0x200000000
</source>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<source lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</source>
== Show ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME               AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool              25.4G  7.79G        0     64K              0      7.79G
rpool/ROOT         25.4G  6.29G        0     18K              0      6.29G
rpool/ROOT/snv_98  25.4G  6.29G        0   6.29G              0          0
rpool/dump         25.4G  1.00G        0   1.00G              0          0
rpool/export       25.4G    38K        0     20K              0        18K
rpool/export/home  25.4G    18K        0     18K              0          0
rpool/swap         25.8G   512M        0    111M           401M          0
</pre>
If ''zfs list -o space'' is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
First create the ZFS root pool:
# zpool create rpool /dev/dsk/<zfs-disk>
If you want to avoid problems, keep the name rpool.
Create the boot environment (BE) with lucreate:
# lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.
Check that the bootfs has been set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any leftover rootdev entries in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
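What the perl one-liner does: every line starting with rootdev gets prefixed with ''* '', which marks a comment in /etc/system. Demonstrated on a made-up sample line:
<source lang=bash>
# Prefix a rootdev line with '* ' (the /etc/system comment marker);
# the device path here is only an example
echo 'rootdev:/pseudo/md@0:0,0,blk' | perl -pe 's#^(rootdev.*)$#* \1#g'
</source>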
Install the ZFS boot block on the ZFS disk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Activate the new BE:
# luactivate zfsBE
==cannot destroy 'snapshot': dataset is busy==
<source lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</source>
040ce68a3b23286e24892b015f8e4e01f4626861
ZFS fileinfo
0
90
838
215
2015-08-12T15:50:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|fileinfo]]
If you want to see afterwards, e.g., with which block size a file was created, you can look it up with zdb:
<pre>
# zdb -ddd <ZFS> <i-Node>
</pre>
e.g.
<pre>
# ls -i /.globaldevices
524575 /.globaldevices
# zdb -dddd rpool/ROOT/zfsBE 524575
Dataset rpool/ROOT/zfsBE [ZPL], ID 45, cr_txg 8, 27.5G, 459538 objects, rootbp DVA[0]=<0:b1eb43600:200:STD:1> DVA[1]=<0:da0e39e00:200:STD:1> [L0 DMU objset] fletcher4 lzjb BE contiguous unique 2-copy size=800L/200P birth=3168L/3168P fill=459538 cksum=17cad0b0f0:7230399a8a3:134096738e1d8:25bba0c8eec052
Object lvl iblk dblk dsize lsize %full type
524575 3 16K 128K 100M 100M 100.00 ZFS plain file
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 799
path /.globaldevices
uid 0
gid 0
atime Wed Aug 22 09:50:28 2012
mtime Wed Aug 22 09:50:28 2012
ctime Wed Aug 22 09:50:28 2012
crtime Wed Aug 22 09:47:15 2012
gen 2639
mode 101600
size 104857600
parent 4
links 1
</pre>
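The numbers in the output are consistent: dblk is 128K and dnode maxblkid is 799 (zero-based), so the file consists of 800 blocks of 128 KB:
<source lang=bash>
# 800 data blocks of 128 KB each
echo $(( 800 * 128 * 1024 ))
</source>
which is exactly the size 104857600 shown by zdb.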
273e675fdaea0bb9c1023a4015fdf84c7f1a9162
ZFS Recovery
0
30
839
396
2015-08-12T15:50:42Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|Recovery]]
[[Kategorie:Solaris]]
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Going back to an earlier uberblock==
<source lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</source>
In /etc/zfs:
<source lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</source>
For a zpool in a Solaris Cluster:
<source lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</source>
or
<source lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</source>
<source lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</source>
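The timestamps printed by zdb are seconds since the epoch; with GNU date (not the Solaris /usr/bin/date) a single value can be converted by hand:
<source lang=bash>
# Convert an uberblock timestamp (epoch seconds) to a human-readable UTC date
date -u -d @1352184849 '+%Y-%m-%d %H:%M:%S'
</source>
This prints 2012-11-06 07:54:09, matching the zdb output above.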
So, e.g., to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853:
<source lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</source>
==PANIC, NOTICE: spa_import_rootpool: error 19==
The solution is to specify the pool and the device explicitly. So if at boot you get:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
A boot into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
00906b59696907540f9c9b209daa34d11f1a59db
840
839
2015-08-12T15:51:38Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|Recovery]]
[[Kategorie:Solaris]]
==Panic at boot time==
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Going back to an earlier uberblock==
<source lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</source>
In /etc/zfs:
<source lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</source>
For a zpool in a Solaris Cluster:
<source lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</source>
or
<source lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</source>
<source lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</source>
So, e.g., to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853:
<source lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</source>
==PANIC, NOTICE: spa_import_rootpool: error 19==
The solution is to specify the pool and the device explicitly. So if at boot you get:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
A boot into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
23d6354b97e85acf68f1c5c7d79afe92e8f79e3b
ZFS sync script
0
215
841
751
2015-08-12T15:52:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|Sync]]
Like all of my scripts, this script comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines whether you want to use ssh to encrypt the stream. Set it to ''yes'' or ''no''.
* To mark the datasets that the backup host should copy, set this on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, do this to create a ''zfssync'' user (Solaris syntax):
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
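The dataset selection implemented in get_src_list below can be exercised standalone: an entry stays selected only until a marked descendant shows up, so the most specific marked datasets win. A sketch with a made-up dataset list and backup host ''backuphost'':
<source lang=bash>
# Same filter as get_src_list: keep datasets marked for this backup host,
# dropping any entry that is a prefix (parent) of a later marked entry
awk -v backup_server=backuphost '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
    path[$1]=1;
    for(name in path){
        if( index($1,name)==1 && name != $1 && path[name]!=0 ) path[name]=0;
    }
}
END{ for(name in path) if(path[name]==1) print name }
' <<'EOF'
tank/data filesystem backuphost
tank/data/sub filesystem backuphost
tank/other filesystem otherhost
EOF
</source>
Only tank/data/sub survives; tank/data is dropped as soon as its child appears, and tank/other belongs to a different backup host.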
==zfs_sync.sh==
<source lang=bash>
#!/usr/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
SRC_USER=zfssync
SRC_HOST=my_source_server
SRC_POOL=my_source_zpool
DST_POOL=my_local_destination_zpool
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
#AWK=/usr/bin/gawk
AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
# Guess the right IP for communication with source host
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
MYNAME=$(/usr/bin/basename $0 .sh)
MYSELF=$(/usr/bin/hostname)
SRC_DATASETS=/tmp/${MYNAME}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}.lck
TMP_FILE1=/tmp/${MYNAME}.tmp1
TMP_FILE2=/tmp/${MYNAME}.tmp2
BACKUP_PROPERTY="de.timmann:auto-backup"
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'echo "\n--- Got signal: Exiting ...\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
end 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}), look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYSELF} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and bail out...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
last=$1;
}
END{
print last;
}
' $2
}
function get_recursive () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots from ${first} to ${last}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
}
function get_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@backup" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if ( ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}" ) ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@backup" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange we do not have the copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@backup" '$1 ~ zfs{last=$1}END{print last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@backup_$(timestamp)
# Create snapshot for incremental backups
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{print last}' ${SRC_DATASETS} )
get_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_recursive ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
d7b9c0f049350baa2c4d12d220db9f8f4d820e49
Ufw
0
224
856
2015-08-17T07:13:25Z
Lollypop
2
The page was newly created: „[[Kategorie:Linux]] <source lang=bash> # ufw [insert <number>] allow log-all from 10.0.0.0/16 to any app OpenSSH Rule inserted # ufw status verbose Status: ac…“
wikitext
text/x-wiki
[[Kategorie:Linux]]
<source lang=bash>
# ufw [insert <number>] allow log-all from 10.0.0.0/16 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 10.0.0.0/16 (log-all)
</source>
0d6483465d80fc1642211f377cb6509645d4ab2b
857
856
2015-08-17T07:26:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
38b674aede9fc512e9744beb19c969fa32f89f0c
858
857
2015-08-17T07:31:10Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
==Own applications==
/etc/ufw/applications.d/nrpe
<source lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</source>
/etc/ufw/applications.d/mysql
<source lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</source>
81d0af6f83f790f6b21b36c798bac5b5c7c6ee00
859
858
2015-08-17T07:36:10Z
Lollypop
2
/* Own applications */
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
==Own applications==
/etc/ufw/applications.d/nrpe
<source lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</source>
/etc/ufw/applications.d/mysql
<source lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</source>
To inspect a profile, use:
<source lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</source>
816233f9676f8959f3a6edf6e7d3aafe9f3a9df8
860
859
2015-08-17T07:59:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
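As the comment in /etc/default/ufw says, the firewall has to be cycled for the IPv6 change to take effect. A quick disable/enable does it (messages shown are what current ufw versions print; they may vary):
<source lang=bash>
# ufw disable
Firewall stopped and disabled on system startup
# ufw enable
Firewall is active and enabled on system startup
</source>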
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
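The numbered view is also what you use to delete rules again. A sketch (ufw asks for confirmation, and the remaining rules are renumbered afterwards; the echoed rule text is approximate):
<source lang=bash>
# ufw delete 2
Deleting:
 allow log-all from 192.168.2.0/24 to any app OpenSSH
Proceed with operation (y|n)? y
Rule deleted
</source>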
==Own applications==
/etc/ufw/applications.d/nrpe
<source lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</source>
/etc/ufw/applications.d/mysql
<source lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</source>
To inspect a profile, use:
<source lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</source>
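A profile defined this way can then be used exactly like the packaged OpenSSH profile in the rules above; a sketch, reusing the 192.168.2.0/24 network from those rules:
<source lang=bash>
# ufw allow from 192.168.2.0/24 to any app NRPE
Rule added
</source>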
1b18ca335f58674139a67997accac3953d95b5bd
SSH Tipps und Tricks
0
75
861
681
2015-08-17T07:59:32Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the destination=
==SSH across one or more hops==
To open an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in to one hop after the other, it is often awkward to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the route from Host_A to Host_B.
We can reach Host_B only from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
But we can reach GW_2 only via GW_1, so we need an entry for that hop as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now done like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background, and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a Bash builtin; SSH substitutes the host for %h and the port for %p.
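On newer OpenSSH the Bash trick is no longer necessary: <i>ssh -W host:port</i> (OpenSSH 5.4+) forwards stdio to the target, and <i>ProxyJump</i> (OpenSSH 7.3+) chains hops in a single line. A sketch of the equivalent ~/.ssh/config entry:
<pre>
Host Host_B
    ProxyJump GW_1,GW_2
</pre>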
==Escape from paradise==
Problem: the environment you are sitting in is so unhappily walled in with firewalls that you cannot work. But you need to get out via SSH to quickly look something up, or fetch something, elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Whoosh, <i>ssh ssh-via-proxy</i> puts you on the SSH target you want to reach. Of course you can again add a ProxyCommand on top of ssh-server, and so on.
==Oh yes... the internal wiki...==
Not a problem either if it is only reachable from the internal network; then we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local SOCKS5 proxy port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config-file equivalents for -N and -f).
==The fingerprint==
For verification it is often easier to work with shorter strings of digits. That is why the fingerprint is handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
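Newer ssh-keygen versions (OpenSSH 6.8+) print SHA256 fingerprints by default; to compare against an older colon-separated MD5 fingerprint like the one above, select the hash explicitly:
<pre>
$ ssh-keygen -E md5 -lf ~/.ssh/id_dsa.pub
</pre>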
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
See also, on PortableApps:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
06b5448d9972d44b9c7b27a194e8bad16065f698
Category:SSH
14
225
862
2015-08-17T07:59:48Z
Lollypop
2
Created the page: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
881
862
2015-08-18T11:36:47Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
[[Kategorie:Security]]
5b0dec7d42472e13df91db84d5eae2168fab97c0
Category:Putty
14
226
863
2015-08-17T08:00:10Z
Lollypop
2
Created the page: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
MySQL Tipps und Tricks
0
197
864
855
2015-08-17T08:21:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
A setting made with SET GLOBAL only lasts until the server restarts.
'''Don't forget to add it to your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set via general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: as soon as NONE appears among the log_output destinations, there is no logging at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
Mount options are:
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
33ac726954d6d0e8603eb63ca5ec32866304787d
865
864
2015-08-17T10:54:01Z
Lollypop
2
/* Filesystems for MySQL */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
A setting made with SET GLOBAL only lasts until the server restarts.
'''Don't forget to add it to your my.cnf to make it permanent!'''
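To make the logging settings survive a restart, a sketch of a corresponding my.cnf snippet (the file names are only examples; on MySQL 5.1+ the boolean form shown here is valid):
<source lang=mysql>
[mysqld]
log_output = FILE
general_log = 1
general_log_file = /var/lib/mysql/general.log
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time = 2
</source>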
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set via general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: as soon as NONE appears among the log_output destinations, there is no logging at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
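Put together, a possible /etc/fstab line for the MySQL data filesystem (mount point and the data=writeback choice are just examples; pick the data= mode matching your protection needs from the list above):
<source lang=bash>
/dev/mapper/vg--data-lv--ext4--mysql_data  /var/lib/mysql/data  ext4  noatime,data=writeback  0  2
</source>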
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
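The same sanity check can be done with plain shell arithmetic instead of bc:
<source lang=bash>
# Convert the byte count reported by fdisk to whole GiB
bytes=26843545600
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"
</source>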
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root_rw \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # once we have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
2033fbc13b4ab3989c639ccf1ba24d917d12b56c
889
865
2015-08-20T08:16:36Z
Lollypop
2
/* Sysbench */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat_ws("@",user,host) from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
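For example, to make slow-query logging permanent, the SET GLOBAL statements shown below correspond to my.cnf entries like these (a sketch; the file path is the one used elsewhere on this page):
<source lang=mysql>
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2
</source>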
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; metadata is journaled and data blocks are flushed before their metadata is committed)
* data=journal (worst performance but best data protection; metadata and all data are journaled)
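Put together, an /etc/fstab entry for a MySQL data filesystem might look like this (a sketch; the device name is taken from the mkfs example above, the mount point is an assumption):
<source lang=bash>
# example fstab entry -- device and mount point are assumptions
/dev/mapper/vg--data-lv--ext4--mysql_data  /var/lib/mysql  ext4  noatime,data=writeback  0  2
</source>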
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
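The 0.7 rule of thumb in the comment above can be checked quickly; `suggested_buffer_pool_mb` is a hypothetical helper name, not anything from MySQL itself:
<source lang=python>
def suggested_buffer_pool_mb(total_mem_mb):
    # Rule of thumb from the config comment: buffer pool ~= 0.7 * total RAM
    return int(total_mem_mb * 0.7)

# A 2 GiB machine gives the 1433M used in the sample configuration above
print(suggested_buffer_pool_mb(2048))  # -> 1433
</source>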
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # once we have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
e44aa1fa873222508fdbb0afc0b53f2abb19db97
NetApp and Linux
0
227
866
2015-08-17T11:17:35Z
Lollypop
2
Created the page: „[[Kategorie:NetApp]] [[Kategorie:Linux]] ==Check partitioning== Maybe this works: <source lang=bash> # fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fiel…“
wikitext
text/x-wiki
[[Kategorie:NetApp]]
[[Kategorie:Linux]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
b9842d31d2d91f366bd0ab7afc93f2d6f1005055
867
866
2015-08-17T11:18:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
fd79110f1615a203577cf42f92aae4d1cee34948
868
867
2015-08-17T11:19:46Z
Lollypop
2
/* Check partitioning */
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because the extended partition entry is just partitioning metadata. We are aligned!
b341e89cac3583ad7f3e3daa109d8205a5a52893
869
868
2015-08-17T11:51:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because the extended partition entry is just partitioning metadata. We are aligned!
==Links==
* [[https://kb.netapp.com/index?page=content&id=3011193 KB ID: 3011193 - What is an unaligned I/O?]]
d8568bdfe3c656742426f0f541e68915629aa2d5
870
869
2015-08-17T11:51:41Z
Lollypop
2
/* Links */
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because the extended partition entry is just partitioning metadata. We are aligned!
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 KB ID: 3011193 - What is an unaligned I/O?]
d7ea607e50791e3a645567b1ad2deb0b79318c6c
871
870
2015-08-17T11:51:54Z
Lollypop
2
/* Links */
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because the extended partition entry is just partitioning metadata. We are aligned!
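The alignment rule the awk oneliner applies can be sketched in Python (`alignment_bucket` is a hypothetical name; the start sectors are the ones from the fdisk output above):
<source lang=python>
def alignment_bucket(start_sector):
    # 4 KiB blocks = 8 * 512-byte sectors: a partition is aligned
    # when its start sector is divisible by 8; otherwise the
    # remainder is the "bucket" the oneliner reports.
    return start_sector % 8

for dev, start in [("/dev/sda1", 2048), ("/dev/sda2", 13674494),
                   ("/dev/sda5", 13674496), ("/dev/sda6", 25393152)]:
    b = alignment_bucket(start)
    print(dev, "aligned" if b == 0 else "Bucket %d" % b)
</source>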
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp KB ID: 3011193 - What is an unaligned I/O?]
e3bd97525cf5438f3ec03bcede8152a66dec7ae3
Lolly's Wiki:Aktuelle Ereignisse
4
228
872
2015-08-17T12:02:47Z
Lollypop
2
Created the page: „{{Special:RecentChanges}}“
wikitext
text/x-wiki
{{Special:RecentChanges}}
10474b90b1b64e0c54b5c5e5faae1f3a60e009c5
873
872
2015-08-17T12:04:25Z
Lollypop
2
wikitext
text/x-wiki
'''Recent Changes to the wiki:'''<small>{{Special:Recentchanges/8}}</small>'''[[Special:Recentchanges|...more Recent Changes]]'''
44c6c8a3f515088e41d2b076c0f1427bd1ac17db
891
873
2015-08-25T08:32:10Z
Lollypop
2
wikitext
text/x-wiki
'''Recent Changes to the wiki:'''<small>{{Special:Recentchanges/8}}</small>'''[[Special:Recentchanges|...more Recent Changes]]'''
<news/>
6604b1519c91c6636280918e68c089c4e1efb455
SSL and TLS
0
229
874
2015-08-18T10:20:08Z
Lollypop
2
Created the page: „[[Kategorie: Security]] ==HPKP - HTTP Public Key Pinning== A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github…“
wikitext
text/x-wiki
[[Kategorie: Security]]
==HPKP - HTTP Public Key Pinning==
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..............................++
..........................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
...................................................................................................................++
.......................................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";pin-sha256=\"Oh+mTGIdu9+uughG5M1W6pCBRO5Ukja5MOzcl4qxKKw=\";pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";"
</source>
At the end you get one line for adding Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
2d2e23137888daae544e50d3414bf84ad350b279
875
874
2015-08-18T10:24:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Security]]
==HTTPS==
===HSTS - HTTP Strict Transport Security===
* [https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..............................++
..........................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
...................................................................................................................++
.......................................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";pin-sha256=\"Oh+mTGIdu9+uughG5M1W6pCBRO5Ukja5MOzcl4qxKKw=\";pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";"
</source>
At the end you get one line for adding Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
871d11ce2359af704f854dc16f09a7148d5a2fd7
876
875
2015-08-18T10:27:37Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Security]]
==HTTPS==
===HSTS - HTTP Strict Transport Security===
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..............................++
..........................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
...................................................................................................................++
.......................................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";pin-sha256=\"Oh+mTGIdu9+uughG5M1W6pCBRO5Ukja5MOzcl4qxKKw=\";pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";"
</source>
At the end you get one line for adding Strict-Transport-Security and one for Public-Key-Pins. Both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
f52f9229f6b71116674f9b12e363943d31c40b11
877
876
2015-08-18T10:28:04Z
Lollypop
2
Lollypop moved page [[SSL]] to [[SSL and TLS]]: Name not good
wikitext
text/x-wiki
[[Kategorie: Security]]
==HTTPS==
===HSTS - HTTP Strict Transport Security===
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..............................++
..........................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
...................................................................................................................++
.......................................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";pin-sha256=\"Oh+mTGIdu9+uughG5M1W6pCBRO5Ukja5MOzcl4qxKKw=\";pin-sha256=\"i38qmLX9VLKCmH4XNvctxbv+ogiJXHtdPA/6RvvuJHE=\";"
</source>
At the end you get one line for Strict-Transport-Security and one for Public-Key-Pins, both ready to paste into the Apache configuration.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
f52f9229f6b71116674f9b12e363943d31c40b11
879
877
2015-08-18T10:46:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Security]]
==HTTPS==
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
The max-age is entered in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year expressed in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on this page to https, even if the link is an http link. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script for creating the hashes was written by Hanno Böck; it is available on [https://github.com/hannob/hpkp Github].
I added a create option that makes the script more convenient for me; my fork is on [https://github.com/Popyllol/hpkp Github] as well.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for Strict-Transport-Security and one for Public-Key-Pins, both ready to paste into the Apache configuration.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
3a435c34ad6526f4c0e2b8107b009c9f1e3e4e84
890
879
2015-08-21T13:11:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Security]]
=Web=
==HTTPS==
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
The max-age is entered in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year expressed in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on this page to https, even if the link is an http link. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
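Once the header is deployed, you can check it from the command line; a minimal sketch using curl (the hostname is just this site as an example, replace it with your own):

<source lang=bash>
# Fetch only the response headers and look for the HSTS header
curl -sI https://lars.timmann.de/ | grep -i strict-transport-security
</source>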
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script for creating the hashes was written by Hanno Böck; it is available on [https://github.com/hannob/hpkp Github].
I added a create option that makes the script more convenient for me; my fork is on [https://github.com/Popyllol/hpkp Github] as well.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for Strict-Transport-Security and one for Public-Key-Pins, both ready to paste into the Apache configuration.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
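If you want to compute a pin by hand instead of using the script, the pin-sha256 value is simply the base64-encoded SHA-256 hash of the DER-encoded public key (SPKI). A minimal sketch with a throwaway demo key (the /tmp path is just an example):

<source lang=bash>
# Generate a throwaway 2048-bit RSA key purely for demonstration
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
# DER-encode the public key, hash it with SHA-256, base64-encode the result
openssl rsa -in /tmp/demo.key -pubout -outform der 2>/dev/null | openssl dgst -sha256 -binary | base64
</source>

The resulting 44-character base64 string is what goes into a pin-sha256=\"...\" entry.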
=Mail=
==STARTTLS==
<source lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</source>
==SMTPS==
<source lang=bash>
$ openssl s_client -connect <mailserver>:465
</source>
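To go one step further and inspect the certificate the mail server presents, the s_client output can be piped into openssl x509; mail.example.org is a placeholder:

<source lang=bash>
# Empty stdin makes s_client exit right after the handshake;
# print the certificate subject and validity period
echo | openssl s_client -connect mail.example.org:465 2>/dev/null | openssl x509 -noout -subject -dates
</source>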
0b2c9c400fe51c1bf7819f6fcf96861cc451276d
SSL
0
230
878
2015-08-18T10:28:04Z
Lollypop
2
Lollypop moved page [[SSL]] to [[SSL and TLS]]: Name not good
wikitext
text/x-wiki
#WEITERLEITUNG [[SSL and TLS]]
11f235319af4ceb3048b4963dfe63367095436c7
Category:Security
14
231
880
2015-08-18T10:47:15Z
Lollypop
2
Created the page: "[[Kategorie: KnowHow]]"
wikitext
text/x-wiki
[[Kategorie: KnowHow]]
66e66ecf096ffc26f093363afb83fca47a7b1982
NetApp Commands
0
201
882
669
2015-08-18T12:59:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
------------------- --------------------------------- ------------ ---------
svm_kerberos_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_kerberos_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_kerberos_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_kerberos_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
===Flashpool===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
6ec5ed595a6c9ca7d5dadade3021cdb9635586b5
883
882
2015-08-18T13:04:32Z
Lollypop
2
/* Alignment */
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
To see on which buckets the reads and writes occur:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
===Flashpool===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
1de2d36170998bfaf5be701f8dc87726adc14196
Roundcube
0
232
884
2015-08-18T14:01:46Z
Lollypop
2
Created the page: "==Automatic import carddav from Owncloud== Enable carddav: /etc/roundcube/config.inc.php: <source lang=php> ... <// List of active plugins (in plugins/ direct…"
wikitext
text/x-wiki
==Automatic import carddav from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<source lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</source>
This automagically imports all Owncloud contacts from the addressbook "contacts" into Roundcube's carddav plugin:
/usr/share/roundcube/plugins/carddav/config.inc.php
<source lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from CalDAV preferences section so users can't even
// see it
'hide' => false,
);
</source>
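To verify that the CardDAV URL and credentials actually work before pointing Roundcube at them, you can issue a PROPFIND with curl; user, password, and host below are made-up examples:

<source lang=bash>
# An HTTP 207 Multi-Status response means the addressbook is
# reachable with these credentials
curl -s -o /dev/null -w '%{http_code}\n' -u lolly:secret -X PROPFIND -H 'Depth: 0' \
    https://cloud.example.org/remote.php/carddav/addressbooks/lolly/contacts/
</source>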
562febe0f18c9717e80ece1672563ba194941fec
Apache
0
205
885
691
2015-08-19T09:22:26Z
Lollypop
2
/* Inspect the certificate */
wikitext
text/x-wiki
== Generate a certificate ==
===Adjust the default values===
Set Country and the other defaults to values that fit your setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name prime256v1 | openssl ec -aes256 -out server.de.ec-key
</source>
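To double-check which curve a generated key actually uses, inspect it with openssl ec. The sketch below uses an unencrypted throwaway key in /tmp (the command above additionally encrypts the key with -aes256, which would require the passphrase here):

<source lang=bash>
# Generate a demo key on the same curve, then print its parameters
openssl ecparam -genkey -name prime256v1 -out /tmp/demo-ec.key
openssl ec -in /tmp/demo-ec.key -noout -text | grep 'ASN1 OID'
</source>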
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
</VirtualHost>
</source>
7a2d0839f7aaa34518d530bb2f549ee1ffa521fe
Ufw
0
224
886
860
2015-08-19T14:01:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
==Own applications==
===nrpe===
/etc/ufw/applications.d/nrpe
<source lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</source>
===MySQL===
/etc/ufw/applications.d/mysql
<source lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</source>
===Exim===
/etc/ufw/applications.d/exim
<source lang=bash>
[Exim SMTP]
title=Mail Server (Exim, SMTP)
description=Small, but very powerful and efficient mail server
ports=25/tcp
[Exim SMTP Virusscanned]
title=Mail Server (Exim, SMTP Virusscanned)
description=Small, but very powerful and efficient mail server
ports=26/tcp
[Exim SMTPS]
title=Mail Server (Exim, SMTPS)
description=Small, but very powerful and efficient mail server
ports=465/tcp
[Exim SMTP Message Submission]
title=Mail Server (Exim, Message Submission)
description=Small, but very powerful and efficient mail server
ports=587/tcp
</source>
===Inspect your application profile===
<source lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</source>
18dc18bf964a0300f76ef27309d6f91521b641ff
887
886
2015-08-19T14:02:11Z
Lollypop
2
/* Inspect your application profile */
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
==Own applications==
===nrpe===
/etc/ufw/applications.d/nrpe
<source lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</source>
===MySQL===
/etc/ufw/applications.d/mysql
<source lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</source>
===Exim===
/etc/ufw/applications.d/exim
<source lang=bash>
[Exim SMTP]
title=Mail Server (Exim, SMTP)
description=Small, but very powerful and efficient mail server
ports=25/tcp
[Exim SMTP Virusscanned]
title=Mail Server (Exim, SMTP Virusscanned)
description=Small, but very powerful and efficient mail server
ports=26/tcp
[Exim SMTPS]
title=Mail Server (Exim, SMTPS)
description=Small, but very powerful and efficient mail server
ports=465/tcp
[Exim SMTP Message Submission]
title=Mail Server (Exim, Message Submission)
description=Small, but very powerful and efficient mail server
ports=587/tcp
</source>
==Inspect your application profile==
<source lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</source>
c061f3b7fbc60b9a6251a5fc8b99c3e05ae88d6b
888
887
2015-08-19T14:09:12Z
Lollypop
2
/* Exim */
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<source lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</source>
/etc/ufw/sysctl.conf
<source lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</source>
==Setup Rules==
===Adding a rule===
<source lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</source>
===Inserting before===
<source lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</source>
==Own applications==
===nrpe===
/etc/ufw/applications.d/nrpe
<source lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</source>
===MySQL===
/etc/ufw/applications.d/mysql
<source lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</source>
===Exim===
/etc/ufw/applications.d/exim
<source lang=bash>
[Exim SMTP]
title=Mail Server (Exim, SMTP)
description=Small, but very powerful and efficient mail server
ports=25/tcp
[Exim SMTP Virusscanned]
title=Mail Server (Exim, SMTP Virusscanned)
description=Small, but very powerful and efficient mail server
ports=26/tcp
[Exim SMTPS]
title=Mail Server (Exim, SMTPS)
description=Small, but very powerful and efficient mail server
ports=465/tcp
[Exim SMTP Message Submission]
title=Mail Server (Exim, Message Submission)
description=Small, but very powerful and efficient mail server
ports=587/tcp
</source>
Get a list of rules to set from Exim's configuration:
<source lang=awk>
# exim -bP local_interfaces | awk '
BEGIN{
ports[25]="Exim SMTP";
ports[26]="Exim SMTP Virusscanned"
ports[465]="Exim SMTPS";
ports[587]="Exim SMTP Message Submission";
from="any"; # <----- Look if it fits what you want
}
{
gsub(/^.*= /,"");
split($0,services,/ : /);
for(service in services){
split(services[service],part,/\./);
ip=part[1]"."part[2]"."part[3]"."part[4];
port=part[5];
printf "ufw allow log from %s to %s app \"%s\"\n",from,ip,ports[port];
}
}'
ufw allow log from any to 192.168.5.103 app "Exim SMTP"
ufw allow log from any to 192.168.5.103 app "Exim SMTP Virusscanned"
ufw allow log from any to 192.168.5.103 app "Exim SMTPS"
</source>
==Inspect your application profile==
<source lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</source>
5c2ba7f822e740a11472d66891c3e414e2eb6e11
Systemd
0
233
892
2015-08-28T07:43:32Z
Lollypop
2
Created the page: "[[Kategorie:Linux]] ==Units== ===Display units=== <source lang=ini> # systemctl cat zfs.target # /lib/systemd/system/zfs.target [Unit] Description=ZFS startup…"
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Units==
===Display units===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
917b00a59cb6cf89262f05543c06216a5d3785e1
893
892
2015-08-28T07:51:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
===Display units===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
00db8e1abf41a4eb12458995bc1326b6efcb7231
894
893
2015-08-28T07:52:24Z
Lollypop
2
/* List units= */
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
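A plausible fix, assuming the directory only needs to be recreated (ownership and mode may need adjusting on your distribution):

<source lang=bash>
# Recreate the spool directory and retry the unit
sudo mkdir -p /var/spool/anacron
sudo systemctl restart anacron
systemctl status anacron
</source>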
===Display units===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
d3501a41850368ba2824e83cf8a394876a0fddb4
895
894
2015-08-28T07:52:49Z
Lollypop
2
/* Display units */
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
7ec325e3514608530200b44c0990b03ee0c2d260
896
895
2015-08-28T07:55:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Units==
===List units===
As you can see, there are both hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
6195998fb1fdb82f54c78a1eff64331e16cce345
897
896
2015-08-28T10:12:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It extends the classic init system with many new features: it can supervise processes after they have been started, list the sockets owned by processes it started, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, like daemon names are usually written, it has to be written in lowercase.
==Security==
==Units==
===List units===
As you can see, there are both hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
11ad9f32637c82a8a298e4d5a4ee540ce99712fd
898
897
2015-08-28T10:18:04Z
Lollypop
2
/* Security */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It extends the classic init system with many new features: it can supervise processes after they have been started, list the sockets owned by processes it started, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, like daemon names are usually written, it has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
# /lib/systemd/system/systemd-networkd.service
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Network Service
Documentation=man:systemd-networkd.service(8)
ConditionCapability=CAP_NET_ADMIN
DefaultDependencies=no
# dbus.service can be dropped once on kdbus, and systemd-udevd.service can be
# dropped once tuntap is moved to netlink
After=systemd-udevd.service dbus.service network-pre.target systemd-sysusers.service
Before=network.target multi-user.target shutdown.target
Conflicts=shutdown.target
Wants=network.target
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
[Install]
WantedBy=multi-user.target
Also=systemd-networkd.socket
</source>
==Units==
===List units===
As you can see, there are both hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
eb0135629befda12fbe1db95a259af1afcd8a469
899
898
2015-08-28T10:26:36Z
Lollypop
2
/* Use capabilities to drop user privileges (CapabilityBoundingSet) */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It extends the classic init system with many new features: it can supervise processes after they have been started, list the sockets owned by processes it started, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, like daemon names are usually written, it has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
==Units==
===List units===
As you can see, there are both hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
5512041a3741b8d1e5cc1d5675662debf324d2a0
900
899
2015-08-28T10:46:12Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It extends the classic init system with many new features: it can supervise processes after they have been started, list the sockets owned by processes it started, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, like daemon names are usually written, it has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
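As a sketch of how one might apply the same idea to a service of one's own (the unit name, binary path, and chosen capability below are hypothetical, not from any real package):
<source lang=ini>
# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=Example daemon restricted to binding privileged ports

[Service]
ExecStart=/usr/local/sbin/mydaemon
# Everything except CAP_NET_BIND_SERVICE is removed from the bounding set
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
ProtectSystem=full
ProtectHome=yes

[Install]
WantedBy=multi-user.target
</source>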
==Units==
===List units===
As you can see, there are both hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
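Instead of scanning the full list, failed units can also be filtered directly; the output (omitted here) is the same table restricted to failed units:
<source lang=bash>
# systemctl list-units --state=failed
</source>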
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
5d159eaddd67a561cfdcf73ea926899080200656
901
900
2015-08-28T11:01:01Z
Lollypop
2
/* systemd */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for Linux's old and rusty init system.
It has many new features: it extends the classic init system with the ability to supervise processes after they have been started, to list the sockets owned by processes it launched, to add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, as is customary for daemons, the name has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process is launched.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==Dependencies==
===View dependencies===
For example: What do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies -r zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
This is defined in the file as follows:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
f50de3ec3219649f468cd5064815c1c3dc336aa3
902
901
2015-08-28T11:11:59Z
Lollypop
2
/* Dependencies */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for Linux's old and rusty init system.
It has many new features: it extends the classic init system with the ability to supervise processes after they have been started, to list the sockets owned by processes it launched, to add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, as is customary for daemons, the name has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process is launched.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==Dependencies==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', ''zed.service'' is started if it is enabled (''Wants=''), while ''zfs-mount.service'' and ''zfs-share.service'' are strictly required (''Requires='').
===View dependencies===
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
af29d34803c7ede0773607fe583e009b923ce97a
903
902
2015-09-02T21:29:44Z
Lollypop
2
/* Units */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for Linux's old and rusty init system.
It has many new features: it extends the classic init system with the ability to supervise processes after they have been started, to list the sockets owned by processes it launched, to add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, as is customary for daemons, the name has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process is launched.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
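The same mechanism can be used for your own services via a drop-in. A minimal sketch, where the unit name ''myping.service'' and the chosen capability set are hypothetical, not taken from any real package:
<source lang=ini>
# /etc/systemd/system/myping.service.d/capabilities.conf
# Hypothetical drop-in: allow only raw socket access.
[Service]
CapabilityBoundingSet=CAP_NET_RAW
# Optional extra hardening, as in the systemd-networkd example above:
ProtectSystem=full
ProtectHome=yes
</source>
After creating the drop-in, run ''systemctl daemon-reload'' and restart the unit so the new bounding set takes effect.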
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
This mounts a private instance of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleared. This is implemented with a separate mount namespace for the unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
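For example, two units sharing one private /tmp could be sketched like this (the unit and binary names are made up for illustration):
<source lang=ini>
# a.service (hypothetical)
[Service]
PrivateTmp=true
ExecStart=/usr/bin/daemon-a

# b.service (hypothetical): joins the namespace of a.service,
# so both units see the same private /tmp and /var/tmp.
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
ExecStart=/usr/bin/daemon-b
</source>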
=Service=
=Install=
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==Dependencies==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', ''zed.service'' is started if it is enabled (''Wants=''), while ''zfs-mount.service'' and ''zfs-share.service'' are strictly required (''Requires='').
===View dependencies===
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
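To see which capabilities a running process actually holds, you can also decode the ''CapEff'' bitmask that the kernel exposes in /proc. A small sketch, assuming a Linux /proc and that CAP_NET_RAW is capability number 13 (as defined in linux/capability.h):
<source lang=bash>
# Read the effective capability mask of the current process (hex bitmask).
cap_eff=$(awk '/^CapEff:/ {print $2}' /proc/self/status)
echo "CapEff: ${cap_eff}"

# CAP_NET_RAW is capability number 13; test that bit of the mask.
if [ $(( 0x${cap_eff} >> 13 & 1 )) -eq 1 ]; then
    echo "CAP_NET_RAW is in the effective set"
else
    echo "CAP_NET_RAW is not in the effective set"
fi
</source>
An unprivileged shell normally prints an all-zero mask, while a root shell shows all bits set.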
19119af84c60ab96bb7c722e6ef23278d3c21056
904
903
2015-09-02T21:31:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=What is systemd?=
systemd is a replacement for Linux's old and rusty init system.
It has many new features: it extends the classic init system with the ability to supervise processes after they have been started, to list the sockets owned by processes it launched, to add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=systemd=
Yes, as is customary for daemons, the name has to be written in lowercase.
==Security==
===Use capabilities to drop user privileges (CapabilityBoundingSet)===
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped for the started process.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
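The capability sets of any process can be inspected directly: the kernel exposes them as hex bitmasks in the CapInh/CapPrm/CapEff/CapBnd lines of ''/proc/&lt;pid&gt;/status''. As a small sketch of how such a mask maps to names (only a handful of the bit indices from capabilities(7) are listed here):
<source lang=python>
# Decode a capability bitmask as found in /proc/<pid>/status.
# Only a subset of the bit indices from capabilities(7) is listed,
# for illustration.
CAP_NAMES = {
    0: "cap_chown",
    1: "cap_dac_override",
    3: "cap_fowner",
    6: "cap_setgid",
    7: "cap_setuid",
    8: "cap_setpcap",
    10: "cap_net_bind_service",
    11: "cap_net_broadcast",
    12: "cap_net_admin",
    13: "cap_net_raw",
}

def decode_cap_mask(hex_mask):
    """Return the names of the known capabilities set in a hex mask."""
    mask = int(hex_mask, 16)
    return [name for bit, name in sorted(CAP_NAMES.items()) if mask & (1 << bit)]

# cap_net_raw is bit 13, so a mask of 0x2000 contains exactly that bit:
print(decode_cap_mask("0000000000002000"))  # ['cap_net_raw']
</source>
For the arping example above, a process whose CapEff mask is 0x2000 holds exactly cap_net_raw.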
==Units==
===List units===
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
===Display unit status===
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
===Restart units===
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
===Display unit declaration===
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp that lives only as long as the unit is up. When the unit goes down, the directories are removed. This is done with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share one private tmp directory, you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
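For example, two units sharing one private tmp could look like this (the unit and binary names below are made up for illustration):
<source lang=ini>
# a.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/worker-a
PrivateTmp=true
</source>
<source lang=ini>
# b.service (hypothetical) joins the namespace of a.service,
# so both units see the same private /tmp and /var/tmp.
[Unit]
JoinsNamespaceOf=a.service
[Service]
ExecStart=/usr/local/bin/worker-b
PrivateTmp=true
</source>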
=Service=
=Install=
=Take a look with systemctl=
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==Dependencies==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', we want ''zed.service'' to be started if enabled, and we need ''zfs-mount.service'' and ''zfs-share.service''.
===View dependencies===
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
8f12815c321819695fac0a1581b4938ccb332ce3
905
904
2015-09-02T21:41:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names usually are, this has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system: it can watch processes after they have been started, list the sockets owned by processes started by systemd, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped for the started process.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
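Internally, the bounding set is just a bitmask in which each ''CAP_*'' name stands for one bit index from capabilities(7). A sketch of composing such a mask (only a few well-known indices are listed, for illustration):
<source lang=python>
# Compose a capability bitmask from CAP_* names, as the kernel stores it.
# Bit indices are from capabilities(7); only a few are listed here.
CAP_BITS = {
    "CAP_CHOWN": 0,
    "CAP_DAC_OVERRIDE": 1,
    "CAP_FOWNER": 3,
    "CAP_SETGID": 6,
    "CAP_SETUID": 7,
    "CAP_SETPCAP": 8,
    "CAP_NET_BIND_SERVICE": 10,
    "CAP_NET_BROADCAST": 11,
    "CAP_NET_ADMIN": 12,
    "CAP_NET_RAW": 13,
}

def cap_mask(names):
    """OR together the bits of the given capability names."""
    mask = 0
    for name in names:
        mask |= 1 << CAP_BITS[name]
    return mask

# CAP_NET_ADMIN (bit 12) plus CAP_NET_RAW (bit 13) -> 0x3000
print(hex(cap_mask(["CAP_NET_ADMIN", "CAP_NET_RAW"])))  # 0x3000
</source>
A bounding set containing only CAP_NET_ADMIN and CAP_NET_RAW therefore corresponds to the mask 0x3000.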
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', we want ''zed.service'' to be started if enabled, and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp that lives only as long as the unit is up. When the unit goes down, the directories are removed. This is done with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share one private tmp directory, you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
86c366cbdf5a8e4b3d3361f99a5d8feb3bf6753b
906
905
2015-09-02T21:56:22Z
Lollypop
2
/* Security */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names usually are, this has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system: it can watch processes after they have been started, list the sockets owned by processes started by systemd, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped for the started process.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
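A sketch of the difference (the ''/proc/&lt;pid&gt;/status'' sample below is made up): the process is not UID 0, yet its effective set already contains cap_net_raw, which is what actually matters for opening raw sockets. A program that only tests the UID would wrongly refuse to run:
<source lang=python>
# Made-up excerpt of /proc/<pid>/status for a non-root process that
# nevertheless holds cap_net_raw in its effective set.
SAMPLE_STATUS = """\
Uid:\t1000\t1000\t1000\t1000
CapEff:\t0000000000002000
"""

CAP_NET_RAW = 13  # bit index from capabilities(7)

def has_cap_net_raw(status_text):
    """The right test: look at the effective capability mask."""
    for line in status_text.splitlines():
        if line.startswith("CapEff:"):
            return bool(int(line.split()[1], 16) & (1 << CAP_NET_RAW))
    return False

def is_root(status_text):
    """The naive test: real UID equals 0."""
    for line in status_text.splitlines():
        if line.startswith("Uid:"):
            return line.split()[1] == "0"
    return False

# UID is 1000, but cap_net_raw is effective:
print(is_root(SAMPLE_STATUS), has_cap_net_raw(SAMPLE_STATUS))  # False True
</source>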
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it currently has. This prohibits UID changes: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
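A minimal sketch of how the two options combine (the unit name and binary below are hypothetical, not taken from this system):
<source lang=ini>
# hardened-example.service (hypothetical unit, for illustration only)
[Service]
ExecStart=/usr/local/bin/mydaemon
# Only raw-socket access remains in the bounding set.
CapabilityBoundingSet=CAP_NET_RAW
# The process tree can never gain privileges, e.g. via setuid binaries.
NoNewPrivileges=true
</source>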
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', we want ''zed.service'' to be started if enabled, and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp that lives only as long as the unit is up. When the unit goes down, the directories are removed. This is done with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share one private tmp directory, you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
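A sketch of two units sharing one private tmp (the unit and binary names are made up for illustration):
<source lang=ini>
# a.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/worker-a
PrivateTmp=true
</source>
<source lang=ini>
# b.service (hypothetical) joins the namespace of a.service,
# so both units see the same private /tmp and /var/tmp.
[Unit]
JoinsNamespaceOf=a.service
[Service]
ExecStart=/usr/local/bin/worker-b
PrivateTmp=true
</source>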
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
a0c71fdf8834bc7a7eed5f1ae74e3c5c4eaa92dd
Solaris 11 Networking
0
96
907
768
2015-09-10T12:29:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To keep the automatic network configuration from reverting your changes, you have to enable the manual (fixed) configuration mode.
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net12 -l net13 -l net14 -l net15 PCIE0
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create new interface:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Login to new IP.
3. Delete the old interface:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
= Set TCP/UDP parameters (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
cc81aa30091adcb26a9f39070dfbe0db574dfdb8
908
907
2015-09-10T12:34:17Z
Lollypop
2
/* More sophisticated with aggregations and vnics */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, enable the manual configuration mode:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address object:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address object:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
= Set TCP/UDP parameters (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
e1b3771fa4fd1495f757d1ad632b1a66a8823310
909
908
2015-09-10T12:34:56Z
Lollypop
2
/* More sophisticated with aggregations and vnics */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, enable the manual configuration mode:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address object:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address object:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
= Set TCP/UDP parameters (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
9d101c0aa99b6051ae7e85016298f14fbba24f19
Snorby
0
234
910
2015-09-11T07:20:16Z
Lollypop
2
Die Seite wurde neu angelegt: „Just a scribble... /usr/local/bin/suricata -D -c /etc/suricata/suricata.yaml -i eth1 --init-errors-fatal barnyard2 -c /etc/suricata/barnyard2.conf -d /var/log…“
wikitext
text/x-wiki
Just a scribble...
/usr/local/bin/suricata -D -c /etc/suricata/suricata.yaml -i eth1 --init-errors-fatal
barnyard2 -c /etc/suricata/barnyard2.conf -d /var/log/suricata -f unified2.alert -w /var/log/suricata/suricata.waldo -D
7afa7f54c256f7c513b8e6d69d58c820434f1340
911
910
2015-09-11T10:11:21Z
Lollypop
2
wikitext
text/x-wiki
Just a scribble...
<source lang=bash>
/usr/local/bin/suricata -D -c /etc/suricata/suricata.yaml -i eth1 --init-errors-fatal
barnyard2 -c /etc/suricata/barnyard2.conf -d /var/log/suricata -f unified2.alert -w /var/log/suricata/suricata.waldo -D
</source>
8102d5515650509fa117766154f82500613b06e8
MariaDB Tipps und Tricks
0
235
912
2015-09-18T13:47:28Z
Lollypop
2
Die Seite wurde neu angelegt: „==ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded== ===Problem=== <source lang=bash> # mysql ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded </so…“
wikitext
text/x-wiki
==ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded==
===Problem===
<source lang=bash>
# mysql
ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded
</source>
===Solution===
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables
150918 15:41:13 mysqld_safe Logging to '/var/log/mysql/error.log'.
150918 15:41:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.0.20-MariaDB-0ubuntu0.15.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> INSERT INTO mysql.plugin (name, dl) VALUES ('unix_socket', 'auth_socket');
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> shutdown
# service mysql start
</source>
4e1b7945a44f02085332294a0e4e1af62a4c5933
913
912
2015-09-18T13:48:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:MariaDB]]
==ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded==
===Problem===
<source lang=bash>
# mysql
ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded
</source>
===Solution===
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables
150918 15:41:13 mysqld_safe Logging to '/var/log/mysql/error.log'.
150918 15:41:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.0.20-MariaDB-0ubuntu0.15.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> INSERT INTO mysql.plugin (name, dl) VALUES ('unix_socket', 'auth_socket');
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> shutdown
# service mysql start
</source>
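To verify that the plugin row was picked up after the restart, the plugin table can be queried. A quick sanity check (sketch, assuming a running server and credentials that can read the mysql schema):
<source lang=bash>
# mysql -e "SELECT name, dl FROM mysql.plugin WHERE name = 'unix_socket';"
</source>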
8b58cc30264716ed20559605c97fe45f02285d5a
Category:MariaDB
14
236
914
2015-09-18T13:48:44Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Solaris 11 Networking
0
96
915
909
2015-09-24T15:21:46Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, enable the manual configuration mode:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address object:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address object:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
= Set TCP/UDP parameters (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never exceed the MTU of its underlying dladm link.
To raise the link MTU, the ipadm interface has to be disabled first (this interrupts traffic, so BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
e6ec52880498416c974bec6c55c97967072f9932
916
915
2015-09-25T10:15:38Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, enable the manual configuration mode:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address object:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address object:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
= Set TCP/UDP parameters (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never exceed the MTU of its underlying dladm link.
To raise the link MTU, the ipadm interface has to be disabled first (this interrupts traffic, so BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
= Aggregate for iSCSI =
This is crude, but it worked on our Cisco switches:
<source lang=bash>
# dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0,1,2,3,4,5,6,7} iscsi_aggr0
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
2f6242ffc506f70286fe54371a26dde2e435510e
917
916
2015-09-25T14:02:23Z
Lollypop
2
/* Aggregate for iSCSI */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, enable the manual configuration mode:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
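To exercise the failover path, an active interface can be taken out of service and brought back with if_mpadm (a sketch, assuming if_mpadm is available on your release; names as above). Detach net3 so traffic should move to the standby net2, check with ipmpstat, then reattach:
<source lang=bash>
# if_mpadm -d net3
# ipmpstat -i
# if_mpadm -r net3
</source>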
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address object:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address object:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
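Between steps 1 and 3 both address objects can be listed to confirm their state (a sketch using the names above; both v4mailcluster0 and v4mailcluster1 should appear until the old one is deleted):
<source lang=bash>
# ipadm show-addr ipmp0/
</source>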
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:/network/dns/server:default setprop start/group = dns
# svccfg -s svc:/network/dns/server:default setprop start/user = dns
# svccfg -s svc:/network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:/network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:/network/dns/server:default
# svcadm enable svc:/network/dns/server:default
</pre>
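The service reads a BIND configuration from the path set above. A minimal /etc/named.conf sketch (the zone name and zone file are placeholders, not from this page):
<pre>
options {
        directory "/var/named";
};
zone "example.org" {
        type master;
        file "example.org.zone";
};
</pre>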
= Set TCP/UDP parameters (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never exceed the MTU of its underlying dladm link.
To raise the link MTU, the ipadm interface has to be disabled first (this interrupts traffic, so BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
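Afterwards both layers can be checked; link and interface should report the same MTU:
<source lang=bash>
# dladm show-linkprop -p mtu iscsi0
# ipadm show-ifprop -p mtu -m ipv4 iscsi0
</source>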
= Aggregate for iSCSI =
This is crude, but it worked on our Cisco switches:
<source lang=bash>
# echo dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
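The {0..7} form is bash/ksh93 brace expansion; each expansion of the quoted pattern is a single word containing a space, which is why the generated command line is re-parsed by a shell. Prefixing with echo shows what gets produced:
<source lang=bash>
# echo dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0
dladm create-aggr -m trunk -P L4 -L off -l iscsi0 -l iscsi1 -l iscsi2 -l iscsi3 -l iscsi4 -l iscsi5 -l iscsi6 -l iscsi7 iscsi_aggr0
</source>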
fb383055907ac80b74f2100780170e5ba0328ec0
RootKitScanner
0
237
918
2015-10-01T09:18:51Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Security]] =RKHunter= RKHunter is a local security scanner for Linux, Solaris and some other UNIX operating systems. I will describe usage for Ubun…“
wikitext
text/x-wiki
[[Kategorie:Security]]
=RKHunter=
RKHunter is a local security scanner for Linux, Solaris and some other UNIX operating systems.
I will describe usage for Ubuntu/Linux here.
==Installation==
First of all, install it on your system:
<source lang=bash>
# aptitude install rkhunter
</source>
==Update the rule base==
After that (and do this from time to time) update the rule base:
<source lang=bash>
# rkhunter --update
[ Rootkit Hunter version 1.4.0 ]
Checking rkhunter data files...
Checking file mirrors.dat [ No update ]
Checking file programs_bad.dat [ Updated ]
Checking file backdoorports.dat [ No update ]
Checking file suspscan.dat [ No update ]
Checking file i18n/cn [ No update ]
Checking file i18n/de [ Updated ]
Checking file i18n/en [ Updated ]
Checking file i18n/tr [ Updated ]
Checking file i18n/tr.utf8 [ Updated ]
Checking file i18n/zh [ No update ]
Checking file i18n/zh.utf8 [ No update ]
</source>
==Do the first check==
<source lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
Warning: Found enabled inetd service: rstatd/1-5
Warning: syslog-ng configuration file allows remote logging: destination d_logserver { udp("logserver-1"); };
Warning: Suspicious file types found in /dev:
/dev/.udev/rules.d/root.rules: ASCII text
Warning: Hidden directory found: '/etc/.bzr: directory '
Warning: Hidden directory found: '/dev/.udev: directory '
Warning: Hidden file found: /etc/.bzrignore: ASCII text
Warning: Hidden file found: /etc/.etckeeper: ASCII text
Warning: Hidden file found: /dev/.initramfs: symbolic link to `/run/initramfs'
</source>
==Acknowledge false positives==
Many warnings. Check which are false positives and modify your '''/etc/rkhunter.conf'''.
For example, to get rid of these warnings add:
<source lang=bash>
ALLOWHIDDENDIR="/dev/.udev"
ALLOWHIDDENDIR="/etc/.bzr"
ALLOWHIDDENFILE="/etc/.bzrignore"
ALLOWHIDDENFILE="/etc/.etckeeper"
ALLOWHIDDENFILE="/dev/.initramfs"
ALLOWDEVFILE="/dev/.udev/rules.d/root.rules"
INETD_ALLOWED_SVC=rstatd/1-5
ALLOW_SYSLOG_REMOTE_LOGGING=1
</source>
71c195a4366ea02d0a9ff3d037698796fa0f6a5e
919
918
2015-10-01T09:20:08Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Security]]
=RKHunter=
RKHunter is a local security scanner for Linux, Solaris and some other UNIX operating systems.
I will describe usage for Ubuntu/Linux here.
==Installation==
First of all, install it on your system:
<source lang=bash>
# aptitude install rkhunter
</source>
==Update the rule base==
After that (and do this from time to time) update the rule base:
<source lang=bash>
# rkhunter --update
[ Rootkit Hunter version 1.4.0 ]
Checking rkhunter data files...
Checking file mirrors.dat [ No update ]
Checking file programs_bad.dat [ Updated ]
Checking file backdoorports.dat [ No update ]
Checking file suspscan.dat [ No update ]
Checking file i18n/cn [ No update ]
Checking file i18n/de [ Updated ]
Checking file i18n/en [ Updated ]
Checking file i18n/tr [ Updated ]
Checking file i18n/tr.utf8 [ Updated ]
Checking file i18n/zh [ No update ]
Checking file i18n/zh.utf8 [ No update ]
</source>
==Do the first check==
<source lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
Warning: Found enabled inetd service: rstatd/1-5
Warning: syslog-ng configuration file allows remote logging: destination d_logserver { udp("logserver-1"); };
Warning: Suspicious file types found in /dev:
/dev/.udev/rules.d/root.rules: ASCII text
Warning: Hidden directory found: '/etc/.bzr: directory '
Warning: Hidden directory found: '/dev/.udev: directory '
Warning: Hidden file found: /etc/.bzrignore: ASCII text
Warning: Hidden file found: /etc/.etckeeper: ASCII text
Warning: Hidden file found: /dev/.initramfs: symbolic link to `/run/initramfs'
</source>
Many warnings.
Check which are false positives and modify your '''/etc/rkhunter.conf'''.
==Acknowledge false positives==
For example, to get rid of the warnings above, add these lines to '''/etc/rkhunter.conf''':
<source lang=bash>
ALLOWHIDDENDIR="/dev/.udev"
ALLOWHIDDENDIR="/etc/.bzr"
ALLOWHIDDENFILE="/etc/.bzrignore"
ALLOWHIDDENFILE="/etc/.etckeeper"
ALLOWHIDDENFILE="/dev/.initramfs"
ALLOWDEVFILE="/dev/.udev/rules.d/root.rules"
INETD_ALLOWED_SVC=rstatd/1-5
ALLOW_SYSLOG_REMOTE_LOGGING=1
</source>
5bf30fca5a0324d29390605fe830f5d2a72852d7
920
919
2015-10-01T09:29:19Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Security]]
=RKHunter=
RKHunter is a local security scanner for Linux, Solaris and some other UNIX operating systems.
I will describe usage for Ubuntu/Linux here.
==Installation==
First of all, install it on your system:
<source lang=bash>
# aptitude install rkhunter
</source>
==Update the rule base==
After that (and do this from time to time) update the rule base:
<source lang=bash>
# rkhunter --update
[ Rootkit Hunter version 1.4.0 ]
Checking rkhunter data files...
Checking file mirrors.dat [ No update ]
Checking file programs_bad.dat [ Updated ]
Checking file backdoorports.dat [ No update ]
Checking file suspscan.dat [ No update ]
Checking file i18n/cn [ No update ]
Checking file i18n/de [ Updated ]
Checking file i18n/en [ Updated ]
Checking file i18n/tr [ Updated ]
Checking file i18n/tr.utf8 [ Updated ]
Checking file i18n/zh [ No update ]
Checking file i18n/zh.utf8 [ No update ]
</source>
==Do the first check==
<source lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
Warning: Found enabled inetd service: rstatd/1-5
Warning: syslog-ng configuration file allows remote logging: destination d_logserver { udp("logserver-1"); };
Warning: Suspicious file types found in /dev:
/dev/.udev/rules.d/root.rules: ASCII text
Warning: Hidden directory found: '/etc/.bzr: directory '
Warning: Hidden directory found: '/dev/.udev: directory '
Warning: Hidden file found: /etc/.bzrignore: ASCII text
Warning: Hidden file found: /etc/.etckeeper: ASCII text
Warning: Hidden file found: /dev/.initramfs: symbolic link to `/run/initramfs'
</source>
Many warnings.
Check which are false positives and modify your '''/etc/rkhunter.conf'''.
==Acknowledge false positives==
For example, to get rid of the warnings above, add these lines to '''/etc/rkhunter.conf''':
<source lang=bash>
ALLOWHIDDENDIR="/dev/.udev"
ALLOWHIDDENDIR="/etc/.bzr"
ALLOWHIDDENFILE="/etc/.bzrignore"
ALLOWHIDDENFILE="/etc/.etckeeper"
ALLOWHIDDENFILE="/dev/.initramfs"
ALLOWDEVFILE="/dev/.udev/rules.d/root.rules"
INETD_ALLOWED_SVC=rstatd/1-5
ALLOW_SYSLOG_REMOTE_LOGGING=1
</source>
After that, rkhunter should produce no output:
<source lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
#
</source>
Your base setup is now done. From now on, any further output should prompt you to take a closer look at your system.
==Configure ongoing security checks==
Configure the user who should receive warnings via email in your '''/etc/rkhunter.conf''':
<source lang=bash>
MAIL-ON-WARNING="security-team@yourdomain.tld"
</source>
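To run the check unattended, a nightly cron job can reuse the same command line; combined with MAIL-ON-WARNING you only get mail when something is found. A sketch (path and schedule are assumptions, adjust them to your distribution):
<source lang=bash>
# /etc/cron.d/rkhunter (hypothetical) - nightly check, warnings only
0 3 * * * root /usr/bin/rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only --cronjob
</source>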
e5dbdc760ac119139ff7be267ed2206cbaf758a8
TShark
0
238
921
2015-10-02T06:58:20Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:MySQL]] [[Kategorie:Security]] =TShark= [https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal based wiresha…“
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes the traffic just as Wireshark does. Great tool!
==MySQL traffic==
The little awk magic selects only packets that come from our Ethernet address.
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -aduration:60 -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
8b2a29b58edc9722134b2995d015f4819caa7280
922
921
2015-10-02T07:00:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes the traffic just as Wireshark does. Great tool!
==MySQL traffic==
To watch for MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our Ethernet address on interface '''IFACE'''.
6af19c6a2c9b9943afce69eff4d804abd6a46909
923
922
2015-10-02T07:01:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes the traffic just as Wireshark does. Great tool!
==MySQL traffic==
To watch for MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our Ethernet address on interface ''IFACE''.
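The MAC extraction can be tried in isolation on canned <code>ip link show</code> output (the sample below is made up):
<source lang=bash>
# The awk filter matches the line whose first field is "link/ether"
# and prints the second field, i.e. the MAC address of the interface.
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state UP
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff'
echo "$sample" | awk '$1 ~ /link\/ether/{print $2}'
# prints 00:11:22:33:44:55
</source>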
7ff24b8c3f91badff727cf10a2fa1fbf6e1e1447
MySQL slave with LVM
0
239
924
2015-10-02T21:31:10Z
Lollypop
2
Die Seite wurde neu angelegt: „'''UNFINISHED first few lines...''' ==Create LVM snapshot== ===Get the data mount=== <source lang=bash> # df -h $(mysql --batch --skip-column-names -e "show …“
wikitext
text/x-wiki
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<source lang=bash>
# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
</source>
Enough space for a snapshot?
<source lang=bash>
# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</source>
===Create a consistent snapshot===
<source lang=bash>
master # mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > /var/lib/mysql/master_status.$(date "+%Y%m%d_%H%M%S")
master # lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master # mysql -e "UNLOCK TABLES;"
master # mount /dev/vg-mysql/mysql-data-snap /mnt
master # cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master # mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</source>
Set the innodb_data_file_path to the same value on the slave.
==Copy the data to the slave==
<source lang=bash>
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd /var/lib/mysql ; tar xlvSpzf - )
</source>
<source lang=bash>
</source>
<source lang=bash>
</source>
3a1cad8c0b36def7053601d794b53393d477cd7e
925
924
2015-10-02T21:37:53Z
Lollypop
2
wikitext
text/x-wiki
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<source lang=bash>
master# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
</source>
Enough space for a snapshot?
<source lang=bash>
master# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</source>
===Create a consistent snapshot===
<source lang=bash>
master# mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > /var/lib/mysql/master_status.$(date "+%Y%m%d_%H%M%S")
master# lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master# mysql -e "UNLOCK TABLES;"
master# mount /dev/vg-mysql/mysql-data-snap /mnt
master# cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master# mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</source>
Set the innodb_data_file_path to the same value on the slave.
==Copy the data to the slave==
<source lang=bash>
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd /var/lib/mysql ; tar xlvSpzf - )
</source>
<source lang=bash>
</source>
<source lang=bash>
</source>
8b1666b0fee27aa1df55d27076e7c2462bf07bfa
934
925
2015-10-05T14:38:52Z
Lollypop
2
wikitext
text/x-wiki
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<source lang=bash>
master# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
master# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
</source>
Enough space for a snapshot?
<source lang=bash>
master# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</source>
===Create a consistent snapshot===
<source lang=bash>
master# mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > ${DATADIR}/master_status.$(date "+%Y%m%d_%H%M%S")
master# lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master# mysql -e "UNLOCK TABLES;"
master# mount /dev/vg-mysql/mysql-data-snap /mnt
master# cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master# mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</source>
Set the innodb_data_file_path to the same value on the slave.
==Copy the data to the slave==
<source lang=bash>
slave# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd ${DATADIR} ; tar xlvSpzf - )
</source>
==Create replication user on master==
<source lang=bash>
master# mysql -e ""
</source>
==Setup slave==
<source lang=bash>
slave# mysql -e ""
</source>
fcab44bc718eb8316e568d9b935f3cf895b14dae
938
934
2015-10-09T14:17:52Z
Lollypop
2
/* Get the data mount */
wikitext
text/x-wiki
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<source lang=bash>
master# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
master# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
</source>
Enough space for a snapshot?
<source lang=bash>
master# lvs /dev/mapper/vg--mysql-mysql--data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
mysql-data vg-mysql -wi-ao--- 140,00g
master# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</source>
===Create a consistent snapshot===
<source lang=bash>
master# mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > ${DATADIR}/master_status.$(date "+%Y%m%d_%H%M%S")
master# lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master# mysql -e "UNLOCK TABLES;"
master# mount /dev/vg-mysql/mysql-data-snap /mnt
master# cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master# mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</source>
Set the innodb_data_file_path to the same value on the slave.
==Copy the data to the slave==
<source lang=bash>
slave# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd ${DATADIR} ; tar xlvSpzf - )
</source>
==Create replication user on master==
<source lang=bash>
master# mysql -e ""
</source>
==Setup slave==
<source lang=bash>
slave# mysql -e ""
</source>
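The two setup stubs above are left blank on this page. As a hedged sketch (hostname and replication user are made-up placeholders, not the author's values), the slave setup would typically be a CHANGE MASTER TO statement built from the coordinates recorded in the master_status file during the snapshot:
<source lang=bash>
# Build the statement from the saved SHOW MASTER STATUS output
# (file contents reproduced here as a string for illustration).
status='File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913'
stmt=$(printf '%s\n' "$status" | awk 'NR==2 {printf "CHANGE MASTER TO MASTER_HOST=\047master\047, MASTER_USER=\047repl\047, MASTER_LOG_FILE=\047%s\047, MASTER_LOG_POS=%s;", $1, $2}')
echo "$stmt"
# CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl', MASTER_LOG_FILE='mysql-bin.002366', MASTER_LOG_POS=263911913;
</source>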
24a17f717789bb636dba15612cb5c6b88e90cc34
File:Firefox about-config ssl.png
6
240
926
2015-10-05T07:43:14Z
Lollypop
2
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
ProblemsWithSecurity
0
241
927
2015-10-05T07:47:41Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Security]] '''Avoiding Security is not an otpion! But helps sometimes if you have no chance to administrate your devices without cheating...''' =F…“
wikitext
text/x-wiki
[[Kategorie:Security]]
'''Avoiding security is not an option! But sometimes it helps if you have no way to administer your devices without cheating...'''
=Firefox=
Go to the URL ''about:config'', navigate to the section ''security.ssl3'', and double-click ''security.ssl3.dhe_aes_{128,256}_sha'' to set them to false.
[[Datei:Firefox_about-config_ssl.png]]
5c27ec4b3b0ecb89b4662aecb4b5a9bb2b1c5243
937
927
2015-10-07T12:49:38Z
Lollypop
2
/* Firefox */
wikitext
text/x-wiki
[[Kategorie:Security]]
'''Avoiding security is not an option! But sometimes it helps if you have no way to administer your devices without cheating...'''
=Firefox=
Do '''not''' go to the URL ''about:config'', do '''not''' navigate to the section ''security.ssl3'', and do '''not''' double-click ''security.ssl3.dhe_aes_{128,256}_sha'' to set them to false.
[[Datei:Firefox_about-config_ssl.png]]
f5c54b2f93ab17b2d72cb351eaf6d8e33c8c56ff
NetApp Partner path misconfigured
0
87
928
177
2015-10-05T14:27:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp]]
=FCP PARTNER PATH MISCONFIGURED=
==Check on the filer which LUNs have the problem==
Reset the statistics to zero:
<source lang=bash>
filer> lun stats -z
</source>
Then have a look:
<source lang=bash>
filer> lun stats -o
</source>
It is bad if the "Partner Ops" or "Partner KBytes" column shows anything greater than 0.
==Possible causes==
# No ALUA configured
# MPxIO misconfigured: never run "/opt/NTAP/SANToolkit/bin/mpxio_set -e". This can be undone with "/opt/NTAP/SANToolkit/bin/mpxio_set -d" or by hand in /kernel/drv/scsi_vhci.conf. Afterwards run "touch /reconfigure ; init 6"
5c9737ed16dee536d21b89a24c9aa7794964c1f2
Roundcube
0
232
929
884
2015-10-05T14:29:22Z
Lollypop
2
/* Automatic import carddav from Owncloud */
wikitext
text/x-wiki
[[Kategorie:Web]]
[[Kategorie:Mail]]
==Automatic import carddav from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<source lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</source>
This automagically imports all Owncloud contacts from the addressbook "contacts" into the Roundcube carddav plugin:
/usr/share/roundcube/plugins/carddav/config.inc.php
<source lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from CalDAV preferences section so users can't even
// see it
'hide' => false,
);
</source>
c65a6fd12cbf53af31b9e84d0f56812de33cfbc0
Category:Mail
14
242
930
2015-10-05T14:30:00Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:HowTo]]“
wikitext
text/x-wiki
[[Kategorie:HowTo]]
ebe1831714ffacf092965cac32e409c01adc61d0
931
930
2015-10-05T14:30:22Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Category:Web
14
243
932
2015-10-05T14:30:32Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Apache
0
205
933
885
2015-10-05T14:32:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country & Co to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
</VirtualHost>
</source>
0726055fd989889a0ff25dc236c7b37bc45303e5
961
933
2015-10-30T15:26:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country & Co to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
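For scripted use the prompts can be skipped with -subj. This variant is a sketch, not the setup above: it leaves the key unencrypted (no aes256 step) for automation, and uses prime256v1, which is OpenSSL's name for the secp256r1 curve; "demo" file names and the DN are placeholders.
<source lang=bash>
# Non-interactive sketch: unencrypted EC key plus self-signed certificate,
# DN supplied on the command line instead of via prompts.
openssl ecparam -genkey -name prime256v1 -out demo.ec-key
openssl req -new -x509 -sha256 -key demo.ec-key -out demo.pem -days 1825 \
    -subj "/C=DE/ST=Hamburg/L=Hamburg/O=My Site/CN=*.server.de"
openssl x509 -noout -subject -in demo.pem
</source>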
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
</VirtualHost>
</source>
==ApacheTop==
A top-style view of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
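The pipeline works by turning each log file name into its own -f option pair before handing the whole list to apachetop. Substituting echo for apachetop shows the resulting command line (file names are made up):
<source lang=bash>
# xargs -n 1 echo -f prints "-f <file>" per log file; the second xargs
# collects all those tokens into a single apachetop invocation.
printf '%s\n' access.log error.log | xargs -n 1 echo -f | xargs echo apachetop
# prints: apachetop -f access.log -f error.log
</source>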
f1f891a3aa0793e654a57d3e24eb95774df00c3e
Networker Tipps und Tricks
0
204
935
685
2015-10-06T08:15:05Z
Lollypop
2
/* Status des Backups prüfen */
wikitext
text/x-wiki
[[Kategorie:Backup]]
==Check the backup status==
Show whether a backup ran for the client <networker-client> during the last 24 hours:
<source lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,savetime(17),name,sumsize" -t "1 day ago" -q client=<networker-client>
</source>
Or the last backup for a client:
<source lang=bash>
# /usr/sbin/mminfo -avot -s hhlokens01.srv.ndr-net.de -r "client,group,savetime(17),name,sumsize" -q "group=<group>,client=<networker-client>"
</source>
==Recover/Restore==
Find the SSID of the save set:
<source lang=bash>
# mminfo -s <networker-server> -q "client=<networker-client>,name=<directory>" -r "ssid,name,savetime(17)"
2752466240 <directory> 03/23/15 00:16:16
...
387566382 <directory> 03/31/15 00:16:14
</source>
OK, we want the backup from 2015-03-31 00:16:14, i.e. SSID 387566382.
Restore it into a destination directory:
<source lang=bash>
# recover -s <networker-server> -S 387566382 -d <destination-directory>
</source>
Attention: these are ONLY the files that were backed up on that day!
If you want to restore everything to the state it was in at a specific point in time, proceed as follows:
<source lang=bash>
# recover -s <networker-server> -c <networker-client> -t '03/31/15 00:16:14' -d <destination-directory> -a <directory>
</source>
860e233d17a29c3b431e71b7b09da1cc9ab0d878
936
935
2015-10-06T09:44:30Z
Lollypop
2
/* Status des Backups prüfen */
wikitext
text/x-wiki
[[Kategorie:Backup]]
==Check the backup status==
Show whether a backup ran for the client <networker-client> during the last 24 hours:
<source lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,savetime(17),name,sumsize" -t "1 day ago" -q client=<networker-client>
</source>
Or the last backup for a client:
<source lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,group,savetime(17),name,sumsize" -q "group=<group>,client=<networker-client>"
</source>
==Recover/Restore==
Find the SSID of the save set:
<source lang=bash>
# mminfo -s <networker-server> -q "client=<networker-client>,name=<directory>" -r "ssid,name,savetime(17)"
2752466240 <directory> 03/23/15 00:16:16
...
387566382 <directory> 03/31/15 00:16:14
</source>
OK, we want the backup from 2015-03-31 00:16:14, i.e. SSID 387566382.
Restore it into a destination directory:
<source lang=bash>
# recover -s <networker-server> -S 387566382 -d <destination-directory>
</source>
Attention: these are ONLY the files that were backed up on that day!
If you want to restore everything to the state it was in at a specific point in time, proceed as follows:
<source lang=bash>
# recover -s <networker-server> -c <networker-client> -t '03/31/15 00:16:14' -d <destination-directory> -a <directory>
</source>
127272da8a8278ea3c233bf9016ffc61dbcd9dab
NetApp Commands
0
201
939
883
2015-10-12T08:30:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
To see in which bucket the reads and writes occur:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
==Performance ==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
=== Flashpool ===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
0a9573ce3bd734e7c850054f27a24d92f64f6922
ZFS Networker
0
158
940
732
2015-10-15T08:33:33Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|Backup]]
[[Kategorie:Backup|Networker]]
[[Kategorie:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
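With NAME=sample, the variable block resolves to exactly the conventional names from the structure above; a quick sanity check:
<source lang=bash>
# Derive the conventional names for NAME=sample and print them.
NAME=sample
RGname=${NAME}-rg
NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')
ZPOOL=${NAME}_pool
echo "${RGname} ${NetworkerGroup} ${ZPOOL}"
# prints: sample-rg SAMPLE sample_pool
</source>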
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
==The pre-/pstcmd-script==
'''THIS CODE IS UNTESTED, DO NOT USE IT!'''
'''THIS IS JUST AN EXAMPLE!'''
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For all but get_slaves redirect output to log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin password
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if ${ZONE_CMD} echo >/dev/null 2>&1
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if ${ZONE_CMD} echo >/dev/null 2>&1
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Unmounting can fail while cluster monitoring is active, so disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong number of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
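The nawk JSON parsing used in sophora_get_slaves above can be tried standalone. Here is a minimal sketch with portable awk and a made-up server-list sample (the JSON layout is assumed from the parsing code, not taken from the Sophora API documentation):
<source lang=bash>
# Made-up sample of the slave server list (illustrative only)
json='[{"hostname":"incus.server.de","replicationMode":"SLAVE"},{"hostname":"velum.server.de","replicationMode":"SLAVE"}]'

# Same approach as sophora_get_slaves: strip the outer [{ }], split
# records on },{ and return the quoted value following the given key
slaves=$(echo "$json" | awk '
function get_param(param,name){
    name="\""name"\"";
    count=split(param,tupel,/,/);
    for(i=1;i<=count;i++){
        split(tupel[i],part,/:/);
        if(part[1]==name){gsub(/\"/,"",part[2]);return part[2];}
    }
}
{
    json=$0;
    gsub(/(\[\{|\}\])/,"",json);
    elements=split(json,array,/\},\{/);
    for(element=1;element<=elements;element++){
        print get_param(array[element],"hostname");
    }
}')
echo "$slaves"

# The script then strips the domain; -i is the Solaris xargs spelling,
# -I{} the portable equivalent
short=$(echo "$slaves" | xargs -I{} basename {} .server.de)
echo "$short"
</source>
This prints the two slave hostnames, then their short names with the .server.de suffix removed.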
MD5-Checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
01be6677ddf4342b625b1aa59d805628
</source>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
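The RG and ZONE lines rely on bash suffix removal, ${var%pattern}, which strips the shortest trailing match. A quick sketch with the sample client name:
<source lang=bash>
NSR_CLIENT="sample-cl"

# ${NSR_CLIENT%-cl} drops the trailing "-cl" before the new suffix is appended
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"

echo "${RG} ${ZONE}"   # -> sample-rg sample-zone
</source>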
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Look into the file /tmp/ZFS_Setup.sh, which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
Mount the needed ZFS filesystems.
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</source>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</source>
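The perl one-liner rewrites the backed-up restore paths in place so the create commands point at /tmp. A minimal sketch of the same substitution on a scratch file (the file content here is a single illustrative line):
<source lang=bash>
RG=sample-rg
FILE=/tmp/${RG}.ClusterCreateCommands.txt   # scratch copy for illustration

# A line as it would appear in the backed-up command file
printf 'clrg create -i /local/%s/cluster_config/nsr_backup/%s.clrg_export.xml %s\n' \
    "${RG}" "${RG}" "${RG}" > "${FILE}"

# Same in-place rewrite as above: point the create commands at /tmp
perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" "${FILE}"

cat "${FILE}"
</source>
Afterwards the file reads: clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg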
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</source>
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
To create the client resource, I use a script like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
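The $(basename ${RGname} -rg) substitutions above work because basename treats its second argument as a suffix to strip. A quick check of the expansion:
<source lang=bash>
RGname=sample-rg

# basename removes the "-rg" suffix, leaving the bare name
NAME=$(basename "${RGname}" -rg)

echo "${NAME}-nsr-res"   # -> sample-nsr-res
</source>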
8979f0c959b3fb046c2fd0626045488879014be4
941
940
2015-10-15T08:38:42Z
Lollypop
2
/* The pre-/pstcmd-script */
wikitext
text/x-wiki
[[Kategorie:ZFS|Backup]]
[[Kategorie:Backup|Networker]]
[[Kategorie:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<source lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</source>
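The NetworkerGroup line uppercases the name with tr; the derivation of the names from NAME can be checked quickly:
<source lang=bash>
NAME=sample

NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z')   # sample -> SAMPLE
RGname=${NAME}-rg
ZPOOL=${NAME}_pool

echo "${NetworkerGroup} ${RGname} ${ZPOOL}"   # -> SAMPLE sample-rg sample_pool
</source>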
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<source lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</source>
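Because the here-document delimiter EOF is unquoted, ${NetworkerGroup} is expanded by the shell at write time and the generated .res file contains only literals. The same pattern can be sketched against a scratch path (the /tmp location and the SAMPLE group name are illustrative):
<source lang=bash>
NetworkerGroup=SAMPLE
RES_FILE=/tmp/${NetworkerGroup}.res   # /tmp target just for illustration

# Unquoted EOF: variables are expanded before the file is written
cat > "${RES_FILE}" <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF

grep 'type:' "${RES_FILE}"   # -> type: savepnpc;
</source>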
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<source lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For all but get_slaves redirect output to log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin password
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if ${ZONE_CMD} echo >/dev/null 2>&1
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if ${ZONE_CMD} echo >/dev/null 2>&1
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Unmounting can fail while cluster monitoring is active, so disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
sleep 1
${CLRS_CMD} monitor ${RES}
fi
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Unmounting can fail while cluster monitoring is active, so disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from the backup server: use -c
CLIENT_NAME=$(print_option -c ${commandline})
# Called from the command line: use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort: fall back to pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong count of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong count of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump_cluster <Ressource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the directory in which to write the ZFS setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
[ "_${ZONE}_" != "__" ] && ${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</source>
MD5 checksum
<source lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
01be6677ddf4342b625b1aa59d805628
</source>
!!! THIS CODE IS UNTESTED, DO NOT USE IT !!!
!!! THIS IS JUST AN EXAMPLE !!!
==Restore/Recover==
===Set some variables===
<source lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</source>
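The ${NSR_CLIENT%-cl} expansions above strip the shortest trailing match of "-cl" before a new suffix is appended. A quick sketch of what they expand to:
<source lang=bash>
NSR_CLIENT="sample-cl"
RG="${NSR_CLIENT%-cl}-rg"       # %-cl removes the trailing "-cl"
ZONE="${NSR_CLIENT%-cl}-zone"
echo "${RG} ${ZONE}"            # sample-rg sample-zone
</source>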
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</source>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</source>
Inspect the file /tmp/ZFS_Setup.sh, which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
# Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</source>
Mount the needed ZFS filesystems.
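A minimal sketch (using the illustrative dataset names from the example above) that prints the mount commands for review before running them:
<source lang=bash>
# Print (not run) one zfs mount command per dataset; pipe to sh to execute.
for ds in app cluster_config data1 data2 home log usr_local zone
do
    echo "/usr/sbin/zfs mount sample_pool/${ds}"
done
</source>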
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</source>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</source>
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</source>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node, do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</source>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</source>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</source>
Now we have a client name we can connect to: sample-lh
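The $(basename ${RGname} -rg) calls above use the two-argument form of basename, which strips the given suffix from the name:
<source lang=bash>
RGname=sample-rg
# basename NAME SUFFIX removes a trailing SUFFIX, here "-rg"
echo "$(basename ${RGname} -rg)-nsr-res"   # prints: sample-nsr-res
</source>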
82b924ef752ee1e9ff48df396a4711f5f03f3a3b
HP 3par
0
213
942
749
2015-10-17T16:40:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
Unsorted collection... Don't do this...
It doesn't really work this way...
<source lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</source>
<source lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</source>
<source lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</source>
<source lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</source>
<source lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</source>
==Watch disk initialization==
<source lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</source>
9c148cdb471dfe5d7bd4845967da76720b06dab2
943
942
2015-10-18T09:23:03Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
Unsorted collection... Don't do this...
It doesn't really work this way...
<source lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</source>
<source lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</source>
<source lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</source>
<source lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</source>
<source lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</source>
==Create a set of initiators==
<source lang=bash>
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c2 21000024ff8f5aae
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c3 21000024ff8f5aaf
</source>
<source lang=bash>
3par-storage cli% createhostset Devel
3par-storage cli% createhostset -add Devel unix14_c2
3par-storage cli% createhostset -add Devel unix14_c3
</source>
==Watch disk initialization==
<source lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</source>
02a020e67824fab59f25d4e5c88321377e9d63ca
944
943
2015-10-18T09:42:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
Unsorted collection... Don't do this...
It doesn't really work this way...
<source lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</source>
<source lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</source>
<source lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</source>
<source lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</source>
<source lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</source>
==Group virtual volumes to sets (vv -> vvset)==
<source lang=bash>
3par-storage cli% createvvset -comment "Set for all vvs of Solaris Devel" DevelVVSet
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.3
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.4
</source>
==Create a set of initiators==
<source lang=bash>
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c2 21000024ff8f5aae
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c3 21000024ff8f5aaf
</source>
<source lang=bash>
3par-storage cli% createhostset DevelHosts
3par-storage cli% createhostset -add DevelHosts unix14_c2
3par-storage cli% createhostset -add DevelHosts unix14_c3
</source>
==Map virtual volumes as LUNs to a set of initiators==
<source lang=bash>
3par-storage cli% createvlun set:DevelVVSet 0+ set:DevelHosts
</source>
This maps all VVs from DevelVVSet to all hosts in DevelHosts, with automatic LUN numbering (+) starting at 0.
<source lang=bash>
3par-storage cli% showvlun
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type Status ID
0 VV_DB_TEST01_DATA_DS.3 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
0 VV_DB_TEST01_DATA_DS.3 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
-----------------------------------------------------------------------------------------------
4 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 set:DevelVVset set:DevelHosts ---------------- --- host set
---------------------------------------------------------------------
1 total
</source>
==Watch disk initialization==
<source lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</source>
37ff30797ab99d71f21eeb5ec46040e64a2043f0
Dpkg
0
244
945
2015-10-28T08:16:52Z
Lollypop
2
The page was newly created: „[[Kategorie:Linux]] ==Missing key id NO_PUBKEY== <source lang=bash> # apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid> </source>“
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Missing key id NO_PUBKEY==
<source lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</source>
4c3d10a4759eacdd7b7f6a0904bc7edaa1501ac8
Bash cheatsheet
0
37
946
137
2015-10-28T08:42:42Z
Lollypop
2
wikitext
text/x-wiki
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
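The same mechanism also strips other suffixes, e.g. file extensions:
<source lang=bash>
file=report.tar.gz
echo ${file%.gz}    # report.tar  (shortest suffix match)
echo ${file%%.*}    # report      (longest suffix match)
</source>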
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. incrementing by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with multiple counters:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $(( 3 + 4 ))
$ echo $(( 2 ** 8 )) # 2^8
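This also works with variables (no $ needed inside the arithmetic expansion):
<source lang=bash>
a=7 ; b=3
echo $(( a * b + 1 ))   # 22
echo $(( a % b ))       # modulo: 1
</source>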
[[Kategorie:Bash]]
ed0acfc4912269a23c533531a52f4ad874c6e64b
Wireshark
0
245
947
2015-10-28T09:56:36Z
Lollypop
2
The page was newly created: „[[Kategorie:MySQL]] [[Kategorie:Security]] ==Add MySQL decoding==“
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
==Add MySQL decoding==
558410d9c4c314ad725160dd2bfd8ceece18f2f8
952
947
2015-10-28T10:06:17Z
Lollypop
2
/* Add MySQL decoding */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
==Add MySQL decoding==
[[Datei:Wireshark Column Preferences.jpg|800px|left|Select "Column Preferences..."]]
[[Datei:Wireshark Column Add.jpg|800px|left|Add a column]]
[[Datei:Wireshark Column Add Field Name.jpg|800px|left|Field type: "Custom", Field name: "mysql.query"]]
[[Datei:Wireshark Column Name.jpg|800px|left|Click on "New Colum" and customize the name]]
85decd63955507b18bb21e1022096e8cd0b7b570
953
952
2015-10-28T10:06:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
==Add MySQL decoding==
[[Datei:Wireshark Column Preferences.jpg|800px|left|Select "Column Preferences..."]]
[[Datei:Wireshark Column Add.jpg|800px|left|Add a column]]
[[Datei:Wireshark Column Add Field Name.jpg|800px|left|Field type: "Custom", Field name: "mysql.query"]]
[[Datei:Wireshark Column Name.jpg|800px|left|Click on "New Column" and customize the name]]
969f00fbf89be869b57cf64e5dcbc86fa84f489c
955
953
2015-10-28T10:13:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
==Add MySQL decoding==
[[Datei:Wireshark Column Preferences.jpg|800px|left|Select "Column Preferences..."]]
[[Datei:Wireshark Column Add.jpg|800px|left|Add a column]]
[[Datei:Wireshark Column Add Field Name.jpg|800px|left|Field type: "Custom", Field name: "mysql.query"]]
[[Datei:Wireshark Column Name.jpg|800px|left|Click on "New Column" and customize the name]]
[[Datei:Wireshark Column MySQL Query.jpg|800px|left|Et voilà!]]
495c59cfc92ffc6e418879b296c821ac3892df2a
File:Wireshark Column Preferences.jpg
6
246
948
2015-10-28T09:57:41Z
Lollypop
2
Wireshark Column Preferences
wikitext
text/x-wiki
Wireshark Column Preferences
be1814fc86b6cc9fe969edca95ed6b634df2db6f
File:Wireshark Column Add.jpg
6
247
949
2015-10-28T09:59:05Z
Lollypop
2
Wireshark Column Add
wikitext
text/x-wiki
Wireshark Column Add
872ff090ac1dc5105dd5079cb3f0f23151cbfbe1
File:Wireshark Column Add Field Name.jpg
6
248
950
2015-10-28T10:00:43Z
Lollypop
2
Wireshark Column Add Field Name
wikitext
text/x-wiki
Wireshark Column Add Field Name
0ce41658ec001d543934055b4c45a1d13a4978be
File:Wireshark Column Name.jpg
6
249
951
2015-10-28T10:01:28Z
Lollypop
2
Wireshark Column Name
wikitext
text/x-wiki
Wireshark Column Name
bfcaa797d0d3748792dafe8b40ab7c662158398c
File:Wireshark Column MySQL Query.jpg
6
250
954
2015-10-28T10:11:40Z
Lollypop
2
Wireshark Column MySQL Query
wikitext
text/x-wiki
Wireshark Column MySQL Query
143417149c18171aff434c0bd461e42f2b0e50e8
SSL and TLS
0
229
956
890
2015-10-29T15:54:18Z
Lollypop
2
/* Mail */
wikitext
text/x-wiki
[[Kategorie: Security]]
=Web=
==HTTPS==
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
$ sudo a2enmod headers
</source>
The max-age is given in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on this page to https, even if the link is an http link. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was written by Hanno Böck and is available at [https://github.com/hannob/hpkp Github].
I added a create option, which makes the script more convenient for me, at [https://github.com/Popyllol/hpkp Github].
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
$ sudo a2enmod headers
</source>
=Mail=
==STARTTLS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</source>
You can specify the security priority for the handshake like this:
<source lang=bash>
$ gnutls-cli --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</source>
Or use sslscan to check the available ciphers:
<source lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</source>
==SMTPS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -connect <mailserver>:465
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --port 465 <mailserver>
</source>
1e7a56f0ff2b102063e24edde9a6bd6753919a56
957
956
2015-10-29T16:09:28Z
Lollypop
2
/* STARTTLS */
wikitext
text/x-wiki
[[Kategorie: Security]]
=Web=
==HTTPS==
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
$ sudo a2enmod headers
</source>
The max-age is given in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on this page to https, even if the link is an http link. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was written by Hanno Böck and is available at [https://github.com/hannob/hpkp Github].
I added a create option, which makes the script more convenient for me, at [https://github.com/Popyllol/hpkp Github].
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
$ sudo a2enmod headers
</source>
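For reference, a pin-sha256 value is just the base64-encoded SHA-256 digest of the key's DER-encoded SubjectPublicKeyInfo, so it can also be computed with plain openssl. A minimal sketch on a throwaway key (the file path and key size here are arbitrary, not what the script above uses):
<source lang=bash>
# Throwaway RSA key standing in for the real server key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out /tmp/demo.key 2>/dev/null
# pin-sha256 = base64( SHA-256( DER-encoded SubjectPublicKeyInfo ) )
openssl pkey -in /tmp/demo.key -pubout -outform der 2>/dev/null \
    | openssl dgst -sha256 -binary \
    | openssl enc -base64
</source>
The result is always 44 base64 characters ending in a single '=' (32 hash bytes).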
=Mail=
==STARTTLS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</source>
You can specify the security priority for the handshake like this:
<source lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</source>
Or use sslscan to check the available ciphers:
<source lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</source>
==SMTPS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -connect <mailserver>:465
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --port 465 <mailserver>
</source>
5a50b55f268fe1548bb2f09bd2028395153b888f
Inetd services
0
251
958
2015-10-30T07:57:02Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris]] ==Setting up rsyncd as inetd service== 1. Put it into the legacy file /etc/inetd.conf <source lang=bash> # printf "rsync\tstream\ttcp\tn…“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Setting up rsyncd as inetd service==
1. Put it into the legacy file /etc/inetd.conf
<source lang=bash>
# printf "rsync\tstream\ttcp\tnowait\troot\t/usr/bin/rsync\t/usr/bin/rsync --config=/etc/rsyncd.conf --daemon\n" >> /etc/inetd.conf
</source>
2. Use inetconv to generate your XML file
<source lang=bash>
# inetconv -o /tmp
100235/1 -> /tmp/100235_1-rpc_ticotsord.xml
Importing 100235_1-rpc_ticotsord.xml ...Done
rsync -> /tmp/rsync-tcp.xml
Importing rsync-tcp.xml ...Done
</source>
3. Optionally modify the generated XML file /tmp/rsync-tcp.xml
4. Import the XML file
<source lang=bash>
# svccfg import /tmp/rsync-tcp.xml
</source>
5. Enable it:
<source lang=bash>
# inetadm -e svc:/network/rsync/tcp:default
</source>
fe3f75cf69fc3457f9c4ede28c4ce89ff1bb4fa7
959
958
2015-10-30T07:57:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Setting up rsyncd as inetd service==
1. Put it into the legacy file /etc/inetd.conf
<source lang=bash>
# printf "rsync\tstream\ttcp\tnowait\troot\t/usr/bin/rsync\t/usr/bin/rsync --config=/etc/rsyncd.conf --daemon\n" >> /etc/inetd.conf
</source>
2. Use inetconv to generate your XML file
<source lang=bash>
# inetconv -o /tmp
100235/1 -> /tmp/100235_1-rpc_ticotsord.xml
Importing 100235_1-rpc_ticotsord.xml ...Done
rsync -> /tmp/rsync-tcp.xml
Importing rsync-tcp.xml ...Done
</source>
3. Optionally modify the generated XML file /tmp/rsync-tcp.xml
4. Import the XML file
<source lang=bash>
# svccfg import /tmp/rsync-tcp.xml
</source>
5. Enable it:
<source lang=bash>
# inetadm -e svc:/network/rsync/tcp:default
</source>
c5295b19f3d2ae9d670c224d65fac04c44d8ca59
960
959
2015-10-30T08:03:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Setting up rsyncd as inetd service==
1. Put it into the legacy file /etc/inetd.conf
<source lang=bash>
# printf "rsync\tstream\ttcp\tnowait\troot\t/usr/bin/rsync\t/usr/bin/rsync --config=/etc/rsyncd.conf --daemon\n" >> /etc/inetd.conf
</source>
2. Use inetconv to generate your XML file
<source lang=bash>
# inetconv -o /tmp
100235/1 -> /tmp/100235_1-rpc_ticotsord.xml
Importing 100235_1-rpc_ticotsord.xml ...Done
rsync -> /tmp/rsync-tcp.xml
Importing rsync-tcp.xml ...Done
</source>
3. Optionally modify the generated XML file /tmp/rsync-tcp.xml
4. Import the XML file
<source lang=bash>
# svccfg import /tmp/rsync-tcp.xml
</source>
5. Enable it:
<source lang=bash>
# inetadm -e svc:/network/rsync/tcp:default
</source>
6. Check it:
<source lang=bash>
# netstat -anf inet | nawk -v port="$(nawk '$1=="rsync"{gsub(/\/.*$/,"",$2);print $2;}' /etc/services)" '$1 ~ port"$" && $NF=="LISTEN"'
*.873 *.* 0 0 49152 0 LISTEN
</source>
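The nawk expression in step 6 above just looks up the rsync port in /etc/services and strips the protocol suffix. Demonstrated on a sample line (shown with awk; nawk on Solaris behaves the same here):
<source lang=bash>
# Sample /etc/services entry; the real check reads the file itself
echo 'rsync           873/tcp' \
    | awk '$1=="rsync"{gsub(/\/.*$/,"",$2);print $2}'
# 873
</source>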
e4de3d460682e46d04ef7cd302cce3bdce1801f6
Solaris 11 First Steps
0
97
962
272
2015-11-05T10:53:05Z
Lollypop
2
/* Package Management */
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The Automated Installer, AI for short, is a new way to set up an install server. The configuration is stored in XML files.
For further information, look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool gets new versions of software over the network.
You can add multiple repositories, search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]].
===Support repository===
[https://pkg-register.oracle.com/register/certificate Get your client certificates]
== Live upgrade is now Boot environments (beadm) ==
For many years the usage of live upgrade was a bit difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to do updates.
The new way to handle upgrades and updates is beadm, the boot environment admin tool. You can create a boot environment manually at any time, as known from live upgrade.
What is new is that software updates from pkg create boot environments automatically if needed (or if pkg is used with --require-new-be or --require-backup-be).
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the new network stack from the project Crossbow (known from OpenSolaris) is implemented.
The new stack virtualizes the network of your Solaris. This means a lot of new features like virtual switches, virtual NICs and so on can be used.
You can build even complex networks virtualized inside your Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... instead of after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of the hardware your server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
=== ipadm ===
The tool ipadm is, together with dladm, a powerful tool to manage your network stack.
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
[[Kategorie:Solaris11]]
08d45d4a1fc68aeca597bd9f62cd03236b48d507
963
962
2015-11-05T11:08:23Z
Lollypop
2
/* Support repository */
wikitext
text/x-wiki
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The Automated Installer, AI for short, is a new way to set up an install server. The configuration is stored in XML files.
For further information, look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool gets new versions of software over the network.
You can add multiple repositories, search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]].
===Support repository===
[https://pkg-register.oracle.com/register/certificate Get your client certificates]
[https://pkg-register.oracle.com/register/product_info/1/ Instructions]
== Live upgrade is now Boot environments (beadm) ==
For many years the usage of live upgrade was a bit difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to do updates.
The new way to handle upgrades and updates is beadm, the boot environment admin tool. You can create a boot environment manually at any time, as known from live upgrade.
What is new is that software updates from pkg create boot environments automatically if needed (or if pkg is used with --require-new-be or --require-backup-be).
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the new network stack from the project Crossbow (known from OpenSolaris) is implemented.
The new stack virtualizes the network of your Solaris. This means a lot of new features like virtual switches, virtual NICs and so on can be used.
You can build even complex networks virtualized inside your Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... instead of after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of the hardware your server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over an interface.
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
=== ipadm ===
The tool ipadm is, together with dladm, a powerful tool to manage your network stack.
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
[[Kategorie:Solaris11]]
5b2451a969296d33731b33949e7b98efb386a791
Fibrechannel Analyse
0
139
964
759
2015-11-06T13:18:00Z
Lollypop
2
/* Kommandos : Solaris */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 14: Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we are attached, together with two storage systems, to the switch with ID 1, and have a connection to a switch with ID 3 to which two more storage systems are attached.
* Node WWN
Here we see four disk devices with two entries each (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths over our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage
Host bus adapter: FC card
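The Port_ID reading above can be sketched as a small decoder: a Fibre Channel port ID is a 24-bit address whose high byte is the switch domain ID, whose middle byte is the switch port (area), and whose low byte is the loop address (AL_PA, 00 on a fabric). The helper name decode_portid is made up for illustration:
<source lang=bash>
# Decode a luxadm Port_ID into domain (switch ID), port (area) and AL_PA
decode_portid() {
    pid=$(printf '%06x' "0x$1")        # zero-pad to 6 hex digits
    printf 'domain=0x%s port=0x%s alpa=0x%s\n' \
        "$(echo "$pid" | cut -c1-2)" \
        "$(echo "$pid" | cut -c3-4)" \
        "$(echo "$pid" | cut -c5-6)"
}
decode_portid 30200    # domain=0x03 port=0x02 alpa=0x00
decode_portid 11400    # domain=0x01 port=0x14 alpa=0x00
</source>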
===luxadm probe===
Lists all detected Fibre Channel devices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough firmware indication (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy when working with NetApps, to map the LUNs.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
For each path to this device there now follows a block of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some info about the manufacturer, model, firmware, port and node WWN, current speed, ...
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
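The one-liner above filters the cfgadm output for lines whose last field is unusable, strips the ,&lt;LUN&gt; suffix to get the bare attachment point, and de-duplicates before feeding each one to cfgadm. The filter part, demonstrated on two sample lines (awk shown; nawk on Solaris behaves the same):
<source lang=bash>
printf '%s\n' \
    'c8::21000024ff2d49a2,0 disk connected configured unusable' \
    'c8::21000024ff2d49a2,1 disk connected configured unusable' \
    | awk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u
# c8::21000024ff2d49a2
</source>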
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful! This performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (access LUNs of a storage)==
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
#
# Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
#
#
# I/O multipathing feature (MPxIO) can be enabled or disabled using
# mpxio-disable property. Setting mpxio-disable="no" will activate
# I/O multipathing; setting mpxio-disable="yes" disables the feature.
#
# Global mpxio-disable property:
#
# To globally enable MPxIO on all fp ports set:
# mpxio-disable="no";
#
# To globally disable MPxIO on all fp ports set:
# mpxio-disable="yes";
#
# Per port mpxio-disable property:
#
# You can also enable or disable MPxIO on a per port basis.
# Per port settings override the global setting for the specified ports.
# To disable MPxIO on port 0 whose parent is /pci@8,600000/SUNW,qlc@4 set:
# name="fp" parent="/pci@8,600000/SUNW,qlc@4" port=0 mpxio-disable="yes";
#
# NOTE: If you just want to enable or disable MPxIO on all fp ports, it is
# better to use stmsboot(1M) as it also updates /etc/vfstab.
#
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris this lives in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-port (san-sw_21)
# Seven ports are equipped with SFPs but not in use (0, 4, 11-15)
# Eight ports have no license and no SFP (No_Module)
# Nine ports are in use
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
If you want to see which WWNs hide behind an NPIV port, portloginshow helps.
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via a script==
===Put the backup host ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup users ~/.ssh/authorized_keys on backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Kommandos: NetApp=
==fcp topology show : Wo hängt mein Frontend-SAN?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Welche WWN habe ich?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
Kleines Schmankerl ist noch die "host address", die uns zeigt, daß wir auf Switch-ID 01 Port 06 hängen.
==fcp wwpn-alias (set|show) : Aliasnamen für mehr Klarheit beim Debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (mit Solaris und ZPool)==
Wenn man wissen möchte, welche NetApp LUNs zu einem ZPool gehören, geht das folgender Maßen:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Beispiel:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Sonstiges=
==Alle WWNs in einem File finden==
Gibt nur die WWNs aus, auch mehrere, wenn mehrere in einer Zeile sind.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
d73fbb4bb26075573309af3fab86d66fe50ac90b
965
964
2015-11-06T13:18:45Z
Lollypop
2
/* LUN masking (access LUNs of a storage) */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switch_Port><AL_PA> (the last byte is the AL_PA, 0 on fabric ports)
So there are evidently two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we hang, along with two storages, on the switch with ID 1, and there is a link to the switch with ID 3, to which two more storages are attached.
* Node WWN
We see four disk devices here, each with two entries (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switches (under position 8 we find ourselves).
Per storage we see two port WWNs here, i.e. two paths through our single host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk Device: storage
Host Bus Adapter: FC card
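The Port_ID reading above can be scripted. A minimal sketch, using sample Port_IDs taken from the dump_map output above (luxadm prints the 24-bit FC address without leading zeros):

```shell
# Decode luxadm dump_map Port_IDs into switch ID and switch port.
# The 24-bit FC address is <domain><port><AL_PA>, two hex digits each;
# luxadm prints it without leading zeros, so 30200 means 03 02 00.
for pid in 30200 10100 10800 ; do            # sample Port_IDs from above
    p=$(printf '%6s' "$pid" | tr ' ' 0)      # left-pad to six digits
    echo "Port_ID $pid: switch ID $((0x$(echo "$p" | cut -c1-2))), port $((0x$(echo "$p" | cut -c3-4)))"
done
```

The first entry decodes to switch ID 3, port 2, matching the reading above.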
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So a StorageTek 6580/6780.
* Revision: 0784
A rough fix on the firmware (firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]].
* Serial Num: SP01068442
Handy for mapping LUNs when you are working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
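The per-path blocks (Controller / Device Address / Class / State) lend themselves to a one-line-per-path summary. A small sketch over sample lines from the output above; on a live system, pipe `luxadm display <dev>` in instead of the here-document:

```shell
# Condense the per-path blocks of `luxadm display` into one line per path:
# controller, device address, ALUA class and state.
luxadm_display() { cat <<'EOF'
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
EOF
}
luxadm_display | awk '
    /^Controller/     { ctrl = $2 }
    /^Device Address/ { addr = $3 }
    /^Class/          { class = $2 }
    /^State/          { print ctrl, addr, class, $2 }'
```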
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
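The linkstat counters are easier to scan when condensed. A sketch that flags remote ports with a conspicuous loss-of-sync count (sample lines from the output above; the threshold of 100 is an arbitrary choice):

```shell
# Flag remote ports whose Loss of Sync counter looks conspicuous.
fcinfo_linkstat() { cat <<'EOF'
Remote Port WWN: 201600a0b86e103c
Link Failure Count: 3
Loss of Sync Count: 3
Remote Port WWN: 201700a0b86e103c
Link Failure Count: 4
Loss of Sync Count: 261
EOF
}
fcinfo_linkstat | awk -F': ' '
    /^Remote Port WWN/    { port = $2 }
    /^Loss of Sync Count/ { if ($2 + 0 > 100) print port, "loss of sync:", $2 }'
```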
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this triggers a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (access LUNs of a storage)==
If the host sees LUNs it should not touch (here LUN 7 of several target ports), the log fills up with label warnings:
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
The offending <port WWN>,<LUN> pairs can be masked in fp.conf:
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
A reconfiguration reboot activates the masking:
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
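Typing the blacklist stanza by hand is error-prone, so it can be generated. A sketch (not a Solaris tool, just a helper made up here) that emits a pwwn-lun-blacklist stanza for one LUN on a list of target port WWNs, using WWNs from the fp.conf example above:

```shell
# Emit an fp.conf pwwn-lun-blacklist stanza for one LUN on several target ports.
LUN=7
WWNS="203200a096691217 203300a096691217 204200a096691217 204300a096691217"
echo 'pwwn-lun-blacklist='
set -- $WWNS
while [ $# -gt 1 ] ; do
    echo "\"$1,$LUN\","     # all but the last entry end with a comma
    shift
done
echo "\"$1,$LUN\";"         # the last entry closes the stanza with a semicolon
```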
=Commands : Common Array Manager=
==lsscs==
Located under Solaris in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the principal switch of the fabric "Fabric1" (all others are subordinate) (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01" (switchId:)
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Ports (san-sw_21)
# 7 ports carry SFPs but are unused (0, 4, 11-15)
# 8 ports have no license and no SFP either (No_Module)
# 9 ports are in use
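The port counts above can be tallied mechanically from a saved listing. A sketch (sample rows from the switchshow output above; State is the sixth column):

```shell
# Tally port states from a saved `switchshow` listing (State = column 6).
switchshow_ports() { cat <<'EOF'
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
EOF
}
switchshow_ports | awk '{ count[$6]++ } END { for (s in count) print s, count[s] }' | sort
```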
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
To see which WWNs hide behind an NPIV port, portloginshow helps.
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf 'Backing up %s to ~%s/%s/%s_config_%s.txt... ' "${switch}" "${LOCALUSER}" "${BACKUPDIR}" "${switch}" "${DATE}"
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
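A retention step keeps the backup directory from growing without bound. A sketch, assuming the same BACKUPDIR layout as the script above (the 30-day window is an arbitrary choice):

```shell
# Delete configuration backups older than 30 days (run on the backup host).
BACKUPDIR="${HOME}/brocade_backup"   # same layout as in the backup script
if [ -d "${BACKUPDIR}" ] ; then
    find "${BACKUPDIR}" -name '*_config_*.txt' -mtime +30 -print -delete
fi
```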
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
# OUI -> vendor lookup used to label WWN members.
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
# Only the [Zoning] section of the configupload file is of interest.
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
# Print the active config, then emit the commands needed to recreate it.
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
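The vendor lookup in the script relies on where the OUI sits inside a WWN: WWNs starting with 5 (NAA 5) carry it in hex digits 2-7, others in digits 5-10. The same extraction as a small shell helper (`oui_of` is a name made up here):

```shell
# Extract the vendor OUI from a WWN, mirroring the logic in the awk script:
# WWNs starting with 5 carry the OUI in digits 2-7, others in digits 5-10.
oui_of() {
    wwn=$(echo "$1" | tr -d ':')
    case "$wwn" in
        5*) echo "$wwn" | cut -c2-7 ;;
        *)  echo "$wwn" | cut -c5-10 ;;
    esac
}
oui_of 50:0a:09:83:90:00:29:24   # NetApp target port  -> 00a098
oui_of 21:00:00:24:ff:36:3a:5a   # QLogic HBA port     -> 0024ff
```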
=Commands: NetApp=
==fcp topology show : where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which shows us that we sit on switch ID 01, port 06.
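Splitting the six-digit host address is mechanical:

```shell
# Split the fcp "host address" into switch (domain) ID, port and AL_PA.
addr=010600                                     # value from the fcp config output above
echo "switch ID: $(echo "$addr" | cut -c1-2)"   # 01
echo "port:      $(echo "$addr" | cut -c3-4)"   # 06
echo "AL_PA:     $(echo "$addr" | cut -c5-6)"   # 00, as usual on a fabric F-Port
```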
==fcp wwpn-alias (set|show) : alias names for more clarity while debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZFS pools)==
If you want to know which NetApp LUNs belong to a ZFS pool, it works like this:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Prints only the WWNs, including several per line if a line contains more than one.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
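If gawk's three-argument match() is not available, grep -E -o does the same job (one WWN per output line, several per input line included):

```shell
# Same extraction with grep: print every colon-separated 8-byte WWN.
printf '%s\n' 'F-Port 21:00:00:24:ff:05:74:e4 -> 50:0a:09:81:8d:32:5d:c4' \
    | grep -Eo '[0-9a-f]{2}(:[0-9a-f]{2}){7}'
```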
2ee4801b9b8628c30febdc3b3a61e0f7faee7ba5
Apache
0
205
966
961
2015-11-11T09:38:15Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generating a certificate ==
===Adjust the defaults sensibly===
Set country and friends to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, you can strip the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
</VirtualHost>
</source>
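Before restarting Apache it is worth checking that certificate and key belong together: the public key embedded in the certificate must equal the key's public part. A sketch, demonstrated with a throwaway pair (substitute server.de.ec-key and server.de-wildcard.pem from above on a real system):

```shell
# A certificate matches its key iff both yield the same public key PEM.
key=$(mktemp) ; crt=$(mktemp)
openssl ecparam -genkey -name prime256v1 -out "$key"
openssl req -new -x509 -sha256 -key "$key" -out "$crt" -days 1 -subj '/CN=test'
if [ "$(openssl x509 -in "$crt" -noout -pubkey)" = "$(openssl ec -in "$key" -pubout 2>/dev/null)" ] ; then
    echo "key and certificate match"
fi
rm -f "$key" "$crt"
```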
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
e72423b7b30114fead40b5316b78f8f46dec2d15
NicTool
0
252
967
2015-11-11T16:32:13Z
Lollypop
2
Die Seite wurde neu angelegt: „<source lang=bash> root@nictool:/var/www/nictool# wget https://github.com/msimerson/NicTool/releases/download/2.30/NicTool.tar.gz root@nictool:/var/www/nictool…“
wikitext
text/x-wiki
<source lang=bash>
root@nictool:/var/www/nictool# wget https://github.com/msimerson/NicTool/releases/download/2.30/NicTool.tar.gz
root@nictool:/var/www/nictool# tar -xzf NicTool.tar.gz
root@nictool:/var/www/nictool# tar -xzf server/NicToolServer-2.??.tar.gz
root@nictool:/var/www/nictool# tar -xzf client/NicToolClient-2.??.tar.gz
root@nictool:/var/www/nictool# mv server foo; mv NicToolServer-2.?? server
root@nictool:/var/www/nictool# mv client bar; mv NicToolClient-2.?? client
root@nictool:/var/www/nictool# rm -rf foo bar
root@nictool:/var/www/nictool# cd client; perl Makefile.PL; make; sudo make install clean
root@nictool:/var/www/nictool# cd ../server; perl Makefile.PL; make; sudo make install clean
root@nictool:/var/www/nictool# cp server/lib/nictoolserver.conf{.dist,}
root@nictool:/var/www/nictool# cp client/lib/nictoolclient.conf{.dist,}
</source>
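The last two cp commands use bash brace expansion: nictoolserver.conf{.dist,} expands to the .dist file followed by the plain name, which copies the shipped default into place:
<source lang=bash>
# Preview what the brace expansion turns into
echo cp nictoolserver.conf{.dist,}
# cp nictoolserver.conf.dist nictoolserver.conf
</source>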
60d59123d31b47fa16638866302254902999a9ae
Nice Options
0
253
968
2015-11-11T16:33:26Z
Lollypop
2
Page created: „ Linux: <source lang=bash> netstat -plant netstat -tulpen </source> Solaris: <source lang=bash> </source>“
wikitext
text/x-wiki
Linux:
<source lang=bash>
netstat -plant
netstat -tulpen
</source>
Solaris:
<source lang=bash>
</source>
a686492bb1806fe19e873bf93d44eb2bfdab1660
969
968
2015-11-11T16:34:13Z
Lollypop
2
wikitext
text/x-wiki
Linux:
<source lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
</source>
Solaris:
<source lang=bash>
</source>
cb0c66286b2d40b628cb3ff1fd55f4816b07836e
Solaris process debugging
0
254
970
2015-12-03T10:38:47Z
Lollypop
2
Page created: „[[Kategorie:Solaris]] ==Set the core file size limit on a process== For example for the sshd (and all resulting childs later): <source lang=bash> # prctl -n pr…“
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Set the core file size limit on a process==
For example for sshd (and all resulting children later):
<source lang=bash>
# prctl -n process.max-core-size -v 8m -t privileged -r -e deny $(pgrep -u root -o sshd)
</source>
58e22fbcb295d3d03caebcf1dff2dfde5a006d86
971
970
2015-12-03T10:48:15Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Debugging]]
==Set the core file size limit on a process==
For example for sshd (and all resulting children from now on):
<source lang=bash>
ssh-server# prctl -n process.max-core-size -v 2g -t privileged -r -e deny $(pgrep -u root -o sshd)
</source>
Check:
<source lang=bash>
ssh-server# prctl -n process.max-core-size $(pgrep -u root -o sshd)
process: 1491: /usr/lib/ssh/sshd
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-core-size
privileged 2.00GB - deny -
system 8.00EB max deny -
</source>
Now all new processes (for example newly logged-in users) will have a core file size limit of 2GB... really? No!
<source lang=bash>
ssh-client# ssh ssh-server
ssh-server# ulimit -Ha | grep core
core file size (blocks, -c) 2097152
</source>
Note the unit it reports: blocks <-- !!!
From the man page: -c Maximum core file size (in 512-byte blocks)
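Converting the reported value from 512-byte blocks to bytes shows why this matters: 2097152 blocks is 1 GiB, not the 2 GB you might expect from reading the number naively:
<source lang=bash>
# 2097152 blocks * 512 bytes/block
echo $((2097152 * 512))
# 1073741824 (= 1 GiB)
</source>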
b09084dfa4c40dbaa3f88f796f11ec23c1aa476d
ZFS sync script
0
215
972
841
2015-12-03T11:07:50Z
Lollypop
2
/* zfs_sync.sh */
wikitext
text/x-wiki
[[Kategorie:ZFS|Sync]]
Like all of my scripts, this script comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark the datasets the backup host should copy, set this property on the source:
<source lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</source>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this (Solaris syntax):
<source lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</source>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
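Once the property is set and the key exchange works, the script can run from cron on the backup host. A possible crontab entry (the path, IP, and pool names here are made up):
<source lang=bash>
# Nightly at 03:15: pull pool "tank" from 192.168.1.10 into local pool "backup"
15 3 * * * /root/bin/zfs_sync.sh -s 192.168.1.10 -u zfssync -sp tank -dp backup >> /var/log/zfs_sync.log 2>&1
</source>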
==zfs_sync.sh==
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
# Some defaults
BACKUP_PROPERTY="de.timmann:auto-backup"
BACKUP_SNAPSHOT_NAME="zfssync"
MBUFFER_PORT=10001
MBUFFER="/opt/mbuffer/bin/mbuffer"
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
SRC_USER=zfssync
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="yes"
LOCAL_SYNC="no"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
AWK=/usr/bin/gawk
#AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MYHOST=$(/usr/bin/hostname)
MYNAME=$(/usr/bin/basename $0)
function usage () {
if [ $# -gt 0 ]
then
if [ "_${1}_" != "_help_" ]
then
echo "Error: ${MYNAME} : $*"
fi
else
echo "Error: ${MYNAME} : Check parameters"
fi
cat <<EOU
Usage: ${MYNAME} <params>
Where params is from this set of parameters:
-s|--src-ip <IP> The host from where we want to sync
-d|--dst-ip <IP> The IP on this host where the remote mbuffer should try to connect to
If omitted the IP to use is guessed via route get.
-u|--user <user> The user on "--src-ip" which has rights to send a zfs.
It must be able to login via ssh with public key.
On Solaris it is the profile "ZFS File System Management"
Try this on the "--src-ip":
# roleadd \
-d /export/home/zfssync \
-c "User for zfs send/recv" \
-s /bin/bash \
-m \
-P "ZFS File System Management" \
zfssync
# rolemod -K type=normal zfssync
# passwd -N zfssync
And then put the ssh-public-key from this host into
/export/home/zfssync/.ssh/authorized_keys
on the "--src-ip".
Remember to set the permissions on .ssh to 700 and .ssh/authorized_keys to 600.
The home directory of the user must not be world-writable.
-sp|--src-pool <zpool> The zpool we want to sync from "--src-ip".
-dp|--dst-pool <zpool> The zpool on this host where we want to sync to ${MYNAME}.
-mbp|--mbuffer-port <port>
If the default port 10001 is in use, choose another port.
-mb|--mbuffer-path <path>
Path of mbuffer binary including binary itself.
-mbbw|--mbuffer-bwlimit <rate>
Limit the read bandwidth of mbuffer (mbuffer option -r)
From mbuffer --help: limit read rate to <rate> B/s, where <rate> can be given in b,k,M,G
-bp|--backup-property <property>
This defaults to ${BACKUP_PROPERTY}.
You have to set this property on all ZFS datasets to ${MYHOST}.
# /usr/sbin/zfs set ${BACKUP_PROPERTY}=${MYHOST} <dataset>
This is inherited as usual.
-bsn|--backup-snap-name <snapshotname>
This is the name of the snapshot which we use to sync.
This defaults to ${BACKUP_SNAPSHOT_NAME}.
Never delete this snapshot manually or you will break the sync and have to restart
from the beginning.
-i|--insecure Not for production environments! No ssh tunneling. No encryption over the net!
EOU
##-l|--local Just do a local zfs send/recv...
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
--help|-h)
usage "help"
;;
-l|--local)
LOCAL_SYNC="yes"
SRC_HOST="localhost"
param="dummy"
shift;
;;
-i|--insecure|--fuck-off-security)
SECURE="no"
param="dummy"
shift;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
param=$1
if [ $# -ge 2 -a "_${2%-*}_" != "__" ]
then
value=$2
shift
fi
shift
;;
esac
case $param in
-s|--src-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_HOST=${value}
;;
-d|--dst-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_HOST=${value};
;;
-u|--user)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_USER=${value}
;;
-sp|--src-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_POOL=${value}
;;
-bsn|--backup-snap-name)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_SNAPSHOT_NAME=${value}
;;
-dp|--dst-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_POOL=${value}
;;
-mbp|--mbuffer-port)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_PORT=${value}
;;
-mb|--mbuffer-path)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER=${value}
;;
-mbbw|--mbuffer-bwlimit)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_OPTS="${MBUFFER_OPTS} -r ${value}"
;;
-bp|--backup-property)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_PROPERTY=${value}
;;
dummy)
;;
*)
usage "Unknown parameter $1"
esac
done
if [ "_${LOCAL_SYNC}_" == "_no_" ]
then
if [ -z ${SRC_HOST} ]; then usage "-s|--src-ip is missing" ; fi
# Guess the right IP for communication with source host
if [ -z ${DST_HOST} ]; then
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
if [ -z ${DST_HOST} ]; then
usage "-d|--dst-ip is missing"
fi
fi
fi
if [ -z ${SRC_POOL} ]; then usage "-sp|--src-pool is missing" ; fi
if [ -z ${DST_POOL} ]; then usage "-dp|--dst-pool is missing" ; fi
SRC_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}_${DST_POOL/\//_}.lck
TMP_FILE1=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp1
TMP_FILE2=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp2
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'printf "\n--- Got signal: Exiting ...\n\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
/usr/bin/rm -f ${LOCK_FILE}; \
exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}), see ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL} > ${SRC_DATASETS} &
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
fi
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYHOST} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and bail out...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="^${1}" -F '[@ \t]' '
$3 == "snapshot" && $1 ~ zfs {
last=$1"@"$2;
}
END{
printf last;
}
' $2
}
function get_incremental_snapshot () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[0]} with $# arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting incremental snapshots from ${first} to ${last}..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} send -I ${first_snap} ${last} | ${ZFS} recv -vFd ${dst_pool}
else
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function get_initial_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} send -R ${zfs} | ${ZFS} recv -vFd ${dst_pool}
else
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M:%S')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} destroy ${src_backup_snapshot}
status=$?
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}"
status=$?
fi
if [ ${status} -eq 0 ] ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@${BACKUP_SNAPSHOT_NAME}_$(timestamp)
# Create snapshot for incremental backups
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} snapshot ${this_backup_src}
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
fi
if [ -z "${last_src}" ] ; then
last_src=${this_backup_src}
fi
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{printf last}' ${SRC_DATASETS} )
get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</source>
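The lock and temp file names in the script flatten the destination pool path with bash pattern substitution: ${DST_POOL/\//_} replaces the first slash with an underscore. A quick demonstration with a hypothetical pool name:
<source lang=bash>
DST_POOL=backup/pool1
echo "/var/run/zfs_sync.sh_${DST_POOL/\//_}.lck"
# /var/run/zfs_sync.sh_backup_pool1.lck
</source>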
2dd274456d900651b4c51ababcffa8af9c0e7d78
LUKS - Linux Unified Key Setup
0
255
973
2015-12-07T14:38:12Z
Lollypop
2
Page created: „[[Kategorie:Linux]] ==Encrypted swap on LVM== ===Create logical volume for swap== <source lang=bash> # lvcreate -L 2g -n lv-swap vg-root Logical volume "lv-…“
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, '''UUID=4764e516-d025-41de-ab5b-72070a3ae765'''
</source>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Save the region where your UUID is written on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
a4bf28013a24cb9c40993b05e1c519cca84491e2
974
973
2015-12-07T14:47:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, '''UUID=4764e516-d025-41de-ab5b-72070a3ae765'''
</source>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Save the region where your UUID is written on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<source lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</source>
====Check the status====
<source lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</source>
====Make the swapFS====
<source lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</source>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<source lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</source>
Reboot to test your settings.
7cd1ddd980bb39521c998ad311159815000b3f6c
975
974
2015-12-07T14:47:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</source>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Save the region where your UUID is written on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<source lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</source>
====Check the status====
<source lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</source>
====Make the swapFS====
<source lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</source>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<source lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</source>
Reboot to test your settings.
6b16cd36331f75543626253f80d7f9c4ca3fee1f
976
975
2015-12-07T14:48:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|Security]]
[[Kategorie:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</source>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Save the region where your UUID is written on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<source lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</source>
====Check the status====
<source lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</source>
====Make the swapFS====
<source lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</source>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<source lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</source>
Reboot to test your settings.
f7aa56add399301b7a5868de703e07acc3167067
977
976
2015-12-07T14:49:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</source>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Save the region where your UUID is written on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<source lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</source>
====Check the status====
<source lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</source>
====Make the swapFS====
<source lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</source>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<source lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</source>
Reboot to test your settings.
48cae7326b079d3d1f1ff1ec56d119f895266301
978
977
2015-12-07T14:49:56Z
Lollypop
2
Lollypop moved page [[Linux Unified Key Setup]] to [[LUKS - Linux Unified Key Setup]] without creating a redirect
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</source>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Save the region where your UUID is written on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<source lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</source>
====Check the status====
<source lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</source>
====Make the swapFS====
<source lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</source>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<source lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</source>
Reboot to test your settings.
48cae7326b079d3d1f1ff1ec56d119f895266301
979
978
2015-12-07T14:52:59Z
Lollypop
2
/* Create and get the UUID */
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<source lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</source>
<source lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</source>
===Create and get the UUID===
'''The mkswap command in this step erases all data on the chosen volume!!!'''
So be sure you pick the right one!
<source lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</source>
Save this UUID for the next step!!!
===Create the encrypted swap===
Put this in your /etc/crypttab:
<source lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</source>
The UUID is the one reported by mkswap above!
Important points:
# offset=40 : Preserves the on-disk region where the UUID is written.
# noearly : Avoids race conditions between the init scripts (cryptdisks and cryptdisks-early).
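As a quick sanity check, the four crypttab fields (name, source device, key file, options) of the example line can be pulled apart in plain shell; a throwaway sketch, not part of the actual setup:

```shell
# Hypothetical check: split the crypttab example line into its four fields
line='cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly'
set -- $line                  # $1=name $2=device $3=key file $4=options
echo "name=$1"
echo "key=$3"
echo "$4" | tr ',' '\n' | grep '^offset='
```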
====Start the encrypted partition====
<source lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</source>
====Check the status====
<source lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</source>
====Make the swapFS====
<source lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</source>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<source lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</source>
Reboot to test your settings.
4e5a5e9743d7c01ab189737d8f2197c072976495
Bash cheatsheet
0
37
980
946
2015-12-10T16:20:36Z
Lollypop
2
wikitext
text/x-wiki
=bash history per user=
If you use SSH public keys for authentication and want a separate history file per key, you can put this in your .bash_profile:
<source lang=bash>
FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/auth.log)
export HISTFILE=.bash_history_${FINGERPRINT:-${SUDO_USER}}
</source>
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or, with several loop variables at once:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
[[Kategorie:Bash]]
ae55b006344e1d91b8bdd504fa74b388b2198a35
981
980
2015-12-10T16:21:32Z
Lollypop
2
/* bash history per user */
wikitext
text/x-wiki
=bash history per user=
If you use SSH public keys for authentication and want a separate history file per key, you can put this in your .bash_profile:
<source lang=bash>
FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/auth.log)
export HISTFILE=.bash_history_${FINGERPRINT:-${SUDO_USER}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or, with several loop variables at once:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
[[Kategorie:Bash]]
420f2750fc4d38fc81f852b9d9a3db1ed14ec4e8
982
981
2015-12-10T16:30:46Z
Lollypop
2
/* bash history per user */
wikitext
text/x-wiki
=bash history per user=
If you use SSH public keys for authentication and want a separate history file per key, you can put this in your .bash_profile:
<source lang=bash>
FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/auth.log)
export HISTFILE=.bash_history_${FINGERPRINT:-${SUDO_USER:-login}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or, with several loop variables at once:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
[[Kategorie:Bash]]
d53f9e1549ccccc50a6f157e12985dd532136a60
984
982
2015-12-11T10:49:39Z
Lollypop
2
/* bash history per user */
wikitext
text/x-wiki
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you use SSH public keys for authentication and want a separate history file per key, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=.bash_history_${FINGERPRINT:-${SUDO_USER:-login}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
Configure rsyslog to write an additional log file that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add the user syslog to group ssh so that rsyslog can create the log file with group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow SSH logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or, with several loop variables at once:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
[[Kategorie:Bash]]
65368d6efc6940f28c22f057ea883eebc286b12e
985
984
2015-12-11T10:50:33Z
Lollypop
2
/* bash history per user */
wikitext
text/x-wiki
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you use SSH public keys for authentication and want a separate history file per key, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty as well, "default" is used as the suffix.
Configure rsyslog to write an additional log file that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add the user syslog to group ssh so that rsyslog can create the log file with group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow SSH logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or, with several loop variables at once:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
[[Kategorie:Bash]]
d425b9327444fb0b238c0f25fa8ea362cee56c85
986
985
2015-12-11T15:45:10Z
Lollypop
2
/* bash history per user */
wikitext
text/x-wiki
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you use SSH public keys for authentication and want a separate history file per key, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty as well, "default" is used as the suffix.
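The ${var:-fallback} chain used in the HISTFILE assignment can be demonstrated on its own; a stand-alone sketch with hypothetical values:

```shell
# Stand-alone demo of the ${var:-fallback} chain used for HISTFILE
FINGERPRINT="" SUDO_USER=""
echo "${FINGERPRINT:-${SUDO_USER:-default}}"   # default
SUDO_USER=alice
echo "${FINGERPRINT:-${SUDO_USER:-default}}"   # alice
FINGERPRINT=SHA256:abcd
echo "${FINGERPRINT:-${SUDO_USER:-default}}"   # SHA256:abcd
```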
Configure rsyslog to write an additional log file that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add the user syslog to group ssh so that rsyslog can create the log file with group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow SSH logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</pre>
==basename==
<pre>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</pre>
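The same expansion family covers a few more common cases beyond %/* and ##*/; a short sketch (path and variable name hypothetical):

```shell
path=/usr/local/bin/tool.sh
echo "${path%/*}"     # /usr/local/bin : shortest suffix match removed (dirname)
echo "${path##*/}"    # tool.sh        : longest prefix match removed (basename)
echo "${path##*.}"    # sh             : file extension
echo "${path%.*}"     # /usr/local/bin/tool : extension stripped
```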
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or, with several loop variables at once:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
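The $[ ] form above still works in bash but is deprecated; the $(( )) arithmetic expansion is the preferred equivalent:

```shell
echo $(( 3 + 4 ))    # 7
echo $(( 2 ** 8 ))   # 256
```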
[[Kategorie:Bash]]
fc4ac94244ebef05d9e0f15e6bbe5f306633cb19
SSH Tipps und Tricks
0
75
983
861
2015-12-11T10:42:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the destination=
==SSH across one or more hops==
To open an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often awkward to carry port forwardings or the SOCKS5 proxy along. It is easier to define ProxyCommands for the route from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so it needs an entry as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and get tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are then simply done like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash feature; SSH substitutes %h with the host and %p with the port.
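On OpenSSH 7.3 and newer, such a multi-hop chain can also be expressed with ProxyJump instead of nested ProxyCommands; a minimal sketch using the host names above:

```
Host Host_B
    ProxyJump GW_1,GW_2
```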
==Breaking out of paradise==
Problem: the environment you are in is so unfortunately walled off with firewalls that you cannot work. But you need SSH access to the outside to quickly check on or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, since most proxies only let you through to well-known ports.
Then add this to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH destination you want to reach. Of course you can again define a ProxyCommand through ssh-server, and so on.
==Ah yes... the internal wiki...==
Not a problem either: if it is only reachable from the internal network, we simply route requests through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- das ist optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or, again, via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config-file equivalents for -N and -f).
=The fingerprint=
For verification, shorter strings are much easier to handle. The fingerprint is therefore a convenient way to compare keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
175046f168fc9aee4c4a7609bbd15b20c4eb8ea5
Autofs
0
256
987
2015-12-16T09:08:04Z
Lollypop
2
The page was newly created: „[[Kategorie:Linux|autofs]] [[Kategorie:Solaris|autofs]] ==Automount home directories== ===/etc/auto.master=== <source lang=bash> # # Include /etc/auto.master.…“
wikitext
text/x-wiki
[[Kategorie:Linux|autofs]]
[[Kategorie:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<source lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</source>
===/etc/auto.master.d/home.autofs===
<source lang=bash>
/home /etc/auto.master.d/home.map
</source>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<source lang=bash>
* :/data/home/& nfs.server.de:/home/&
</source>
The asterisk means that any directory requested under /home/ is matched by this rule.
The ampersand (&) is replaced by the part that matched the asterisk.
So if you enter /home/a, the automounter first looks locally for /data/home/a, which is bind-mounted when found.
<source lang=bash>
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</source>
For another /home/b which is on the nfs server it looks like this:
<source lang=bash>
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</source>
18b84c4b24f92918a83932a2176eb296bdf3ad0e
988
987
2015-12-16T09:36:51Z
Lollypop
2
/* /etc/auto.master.d/home.map */
wikitext
text/x-wiki
[[Kategorie:Linux|autofs]]
[[Kategorie:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<source lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</source>
===/etc/auto.master.d/home.autofs===
<source lang=bash>
/home /etc/auto.master.d/home.map
</source>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<source lang=bash>
* :/data/home/& nfs.server.de:/home/&
</source>
The asterisk means that any directory requested under /home/ is matched by this rule.
The ampersand (&) is replaced by the part that matched the asterisk.
So if you enter /home/a, the automounter first looks locally for /data/home/a, which is bind-mounted when found.
<source lang=bash>
# cd /home/a
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</source>
For another /home/b which is on the nfs server it looks like this:
<source lang=bash>
# cd /home/b
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</source>
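The wildcard/ampersand substitution can be illustrated in plain shell (the key "a" is hypothetical; the real substitution is done by the automounter itself):

```shell
key=a                                          # directory name requested under /home/
entry='* :/data/home/& nfs.server.de:/home/&'
line=${entry//&/$key}                          # every & becomes the matched key
line=${line/#\*/$key}                          # the leading wildcard becomes the key too
echo "$line"
```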
b27446613c5b2e4ff97d52dec65eeb085337723c
NetApp and Solaris
0
219
989
792
2015-12-21T16:04:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
'''Just some unsorted notes.'''
'''Work in progress: do not rely on anything here yet, it is unverified.'''
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If you have 4k as block size in your storage use ashift=12 (alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
Or use "lun alignment show":
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Widely scattered reads; the ashift is probably not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</source>
Or "stats show lun":
<source lang=bash>
filer01*> stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
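The relation ashift = log2(block size) can also be checked without bc; a throwaway sketch (the 4096-byte block size is the example value from above):

```shell
blocksize=4096
ashift=0
n=$blocksize
# halve until 1, counting the steps: that count is log2(blocksize)
while [ $n -gt 1 ]; do
  n=$(( n / 2 ))
  ashift=$(( ashift + 1 ))
done
echo "$ashift"   # 12 for a 4096-byte block size
```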
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
===Solaris Cluster===
<source lang=bash>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | nawk '$3 ~ /^\/dev\//{line=$0;gsub(/s[0-9]+$/,"",$3);command="/usr/cluster/bin/cldev list "$3; command | getline; close(command); print line,$1; next;}NR==2{print $0,"DID";next;}NR==3{print $0"-------";next}{print;}'
controller(7mode)/ device host lun
vserver(Cmode) lun-pathname filename adapter protocol size mode DID
--------------------------------------------------------------------------------------------------------------------------------------------------------
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_1 /dev/rdsk/c0t600A0980383033777B244834556D4865d0s2 iscsi0 iSCSI 500.1g C d5
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_2 /dev/rdsk/c0t600A0980383033777B244834556D4866d0s2 iscsi0 iSCSI 500.1g C d6
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_3 /dev/rdsk/c0t600A0980383033777B244834556D4867d0s2 iscsi0 iSCSI 500.1g C d7
...
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
81798a40f203ef3afa4143769a613310a6c80619
Solaris 11 Zones
0
257
990
2015-12-21T17:17:24Z
Lollypop
2
The page was newly created: „[[Kategorie:Solaris11|Zones]] <source lang=bash> # pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept .. Planning linked: 0/1 done; 1 working: zone:…“
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
..
Planning linked: 0/1 done; 1 working: zone:zone01
Linked image 'zone:zone01' output:
...
# zlogin zone01 beadm list | tail -1
solaris-7 !RO - 18.09M static 2015-12-21 17:57
# clrs disable zone01-rs
</source>
Switch to patched node...
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
9f0be137b54c1dc4b0744ae20bb971ba8370e2fd
991
990
2015-12-21T17:53:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
..
Planning linked: 0/1 done; 1 working: zone:zone01
Linked image 'zone:zone01' output:
...
# zlogin zone01 beadm list | tail -1
solaris-7 !RO - 18.09M static 2015-12-21 17:57
# clrs disable zone01-rs
</source>
Switch to patched node...
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
<source lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</source>
ae794bf6e48c7293ca8978777d1bbeeed2f36594
Gromphadorhina oblongonota
0
175
992
587
2016-01-05T11:17:16Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonata
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
}}
9f1fe0802285905b32026bc6f599e9160d366c0f
993
992
2016-01-05T11:18:06Z
Lollypop
2
Lollypop moved page [[Gromphadorhina oblongonota]] to [[Gromphadorhina oblongonata]], overwriting a redirect: Typo
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonata
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
}}
9f1fe0802285905b32026bc6f599e9160d366c0f
Elliptorhina javanica
0
146
995
565
2016-01-05T11:37:38Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
b7813a5c5e334e079cd19da728f5bba89bb94e9b
999
995
2016-01-05T11:41:28Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID=1174403 Speciesfile.org -> Elliptorhina javanica]
3647230c16e8ee7749fdc3035490ff58cc7ce7a9
Category:Elliptorhina
14
147
1002
410
2016-01-06T08:04:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Winterruhe =
}}
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID=1174395 Speciesfile.org -> Elliptorhina]
5d58c8a1f4c69bb3c7b39b8b9c5cb50020c66510
1005
1002
2016-01-06T09:29:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
}}
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID=1174395 Speciesfile.org -> Elliptorhina]
00ae956c157cba532a5475a55917c9cb22937eff
Template:Systematik
10
117
1003
586
2016-01-06T09:26:28Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{Boolandnot|{{{Gattung|}}}|{{{WissName|}}}}}{{{| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
aa63704997de59870b05ff6f33039e66397d2151
1004
1003
2016-01-06T09:28:44Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{Boolandnot|{{{Gattung|}}}|{{{Art|}}}}}{{{| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
b8dc8eb568e88d1258874b82f4b9bc456fe3cdc1
1006
1004
2016-01-06T09:30:36Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{Boolandnot|{{{Gattung|}}}||{{{Art|}}}}}{{{| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
4c1d0627c2c0f59aeae1c1315f3cc4ff160aa76d
1007
1006
2016-01-06T09:32:33Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{Boolandnot|{{{Gattung|}}}|{{{WissName|}}}}}|{{{| [[Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
2686f9a1d82ede40e18992c275fc85cae0d5119b
1008
1007
2016-01-06T09:38:37Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Gattung|}}} {{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{Boolandnot|
{{{Gattung|}}}|
{{{WissName|}}}
}}
| Kategorie:{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}))
| Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}))
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
16bcd9de793a27964c36d140f0ea0a265db95ccb
1009
1008
2016-01-06T09:40:07Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Gattung|}}} {{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{Boolandnot|
{{{Gattung|}}}|
{{{WissName|}}}
}}
| Kategorie:{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}))
| Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}))
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
bf8cda5f092fdb9ef44d22bdf899d23b3d269458
1010
1009
2016-01-06T09:41:00Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{Boolandnot|
{{{Gattung|}}}|
{{{WissName|}}}
}}
| Kategorie:{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}))
| Kategorie:{{{Gattung|}}}{{#if:{{{Art|}}}| {{!}}{{{Art|}}},{{{Gattung}}}}}))
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
4b6c24ffa3647c616b5ddbfa4636092f3dcf7f60
1012
1010
2016-01-06T09:43:37Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{Boolandnot|
{{{Gattung|}}}|
{{{Art|}}}
}}
|
|((Kategorie:{{{Gattung|}}}))
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
0334db8c1335743f539c048fee91bdeff532436b
1013
1012
2016-01-06T09:44:31Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{Boolandnot|{{{Gattung|}}}|{{{Art|}}}}}
|
|((Kategorie:{{{Gattung|}}}))
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
08797b18544251e0d9b87f8c42ee13f70310bc7e
1014
1013
2016-01-06T09:46:19Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{{Art|}}}
|((Kategorie:{{{Gattung|}}}))
|
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
fc5ca5433bf4093ccaa50fe816ab4a05386b29b3
1015
1014
2016-01-06T09:47:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{{Art|}}}
|[[Kategorie:{{{Gattung|}}}]]
|
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
477a6e233c04395a5e96a59894afe304cce16d3d
1016
1015
2016-01-06T09:47:59Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:
{{{Gattung|}}}
|[[Kategorie:{{{Gattung|}}}]]
|
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
81a8cb26b0f7b1543ecf51cba5c4ac045a00fafb
1017
1016
2016-01-06T09:48:31Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Art|}}}
|[[Kategorie:{{{Gattung|}}}]]
|
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
9fb4054c1f8d42c6549f87ec50e71ebc7a29d6d5
1018
1017
2016-01-06T09:49:06Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{Art|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Art|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
acd7fe9603539188df682488d09cd4dae9666467
Template:Systematik
10
117
1019
1018
2016-01-06T09:50:19Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Art|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
9290617d84b81815b89325559030ab01c9f4b70d
1020
1019
2016-01-06T09:52:16Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Art|}}}| [[Kategorie:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
59d731644faa8c78d6a6e3b50072f128d73f9fe9
1021
1020
2016-01-06T09:53:17Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[Kategorie:{{{Untergattung|}}}]]}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
a3d9c2b302002d56dc5f031f699fbba1babfb1ec
1022
1021
2016-01-06T09:55:52Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
bcf60dd09b35e516059167c76b812497471945bb
1023
1022
2016-01-06T10:00:12Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
|-
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
db84c4e201b3226919aaa85c73061a025edff80c
1024
1023
2016-01-06T10:01:40Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
|-
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
02d260599c0c9aab2fd583477c716afe644ba823
1025
1024
2016-01-06T10:04:56Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"
| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
</div>
|
|
}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
3e018d293d593fa4445e10ae87a142d575695b94
1026
1025
2016-01-06T10:07:40Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
8c72a2465feda17258a24bf0d997f541a9ea120a
1027
1026
2016-01-06T10:11:33Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung|}}} {{{Art|}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
2bfb025077539ac1846620fdb7eaa5637bc0227f
1028
1027
2016-01-06T10:12:46Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
756410ceb11d34f7fd3614171ef6c95984d7ccfd
1029
1028
2016-01-06T10:14:27Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
db33c704212f4114fee31a7fb88b15f3999215d7
1030
1029
2016-01-06T10:27:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
327364b3b2463e5793709f9b590b5bfdb5f53edb
1031
1030
2016-01-06T10:28:42Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{|cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
f5e9fa25616c74defc85fb1b134cca2d7e7566e3
1032
1031
2016-01-06T10:31:57Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{|
! cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
!
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
9379d04e9f25484e6cfb9c5fb9806120227d62d1
1033
1032
2016-01-06T10:32:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
!
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
58f1a6e49bf7c0b2362e8d2d99e67fe182df016a
1034
1033
2016-01-06T10:33:53Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
!
|-
| style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
a8d0eb6c755130c2b79abd931a9bebf1afdcfbf8
1035
1034
2016-01-06T10:38:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
| style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
def3edd137820e5009e65bc6930c8780322348d6
1036
1035
2016-01-06T10:39:01Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{|cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
}}
{{#if:{{{Habitat|}}}|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
}}
{{#if:{{{Nahrung|}}}|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
}}
{{#if:{{{Temperatur|}}}|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
}}
{{#if:{{{StudyGroupNumber|}}}|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|
[[Kategorie:Spezies]]
}}
{{#if:{{{Familie|}}}|
-->[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-->[[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-->[[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-->[[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-->[[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
f5e9fa25616c74defc85fb1b134cca2d7e7566e3
1037
1036
2016-01-06T10:39:45Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
db33c704212f4114fee31a7fb88b15f3999215d7
1038
1037
2016-01-06T10:41:35Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}| [[{{{Familie|}}}]]| ???}}->{{#if:{{{Unterfamilie|}}}| [[{{{Unterfamilie|}}}]]| ???}}->{{#if:{{{Tribus|}}}| [[{{{Tribus|}}}]]| ???}}->{{#if:{{{Gattung|}}}| [[{{{Gattung|}}}]]| ???}}->{{#if:{{{Untergattung|}}}| [[{{{Untergattung|}}}]]| ???}}
</includeonly>
<noinclude>
4886fe05708a2683581cfa8eb8fae1aae328d85a
1039
1038
2016-01-06T10:45:04Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
|
{{{Bildbeschreibung}}}
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
0180deab68189d5065725f34041b8dc40874b6ad
1044
1039
2016-01-06T12:28:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
</includeonly>
<noinclude>
3ef942bfd0bf24b1d3e94a3488d868609fc3b515
1046
1044
2016-01-06T12:33:07Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} -> {{{Gattung|}}} {{{Art|}}}]
}}
</includeonly>
<noinclude>
b86acefb0043f3ab9ac32c1c544cb8b233bbf07a
1047
1046
2016-01-06T12:33:58Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} -> {{{Gattung|}}} {{{Art|}}}]
}}
</includeonly>
<noinclude>
b401d6bd487e8cfc7a3fc2f08641793be3a94130
1048
1047
2016-01-06T12:34:35Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
</includeonly>
<noinclude>
0da64e4e5b14cdb726107a61293b4fd3c79e6f8b
1050
1048
2016-01-06T12:35:55Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
</includeonly>
<noinclude>
0b8ab29b8527548d1f0284b84b95101562726fb0
1058
1050
2016-01-06T13:43:46Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* urn:lsid:{{{LSID|}}} -> {{{Gattung|}}} {{{Art|}}}
}}
</includeonly>
<noinclude>
220209b54fcdfa935776e84eeb07f08b8f60b892
1059
1058
2016-01-06T13:45:07Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}} -> {{{Gattung|}}} {{{Art|}}}
}}
</includeonly>
<noinclude>
468c74d1a9e610647e64d0a33022b932e8e19122
1062
1059
2016-01-06T13:46:52Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
8826c98d2d993d1ec828dc990af2e4b7821a528a
Category:Gromphadorhina
14
183
1040
557
2016-01-06T12:25:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
}}
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID=1174409 Speciesfile.org -> Gromphadorhina]
0fb299a3e00529e3b7996ebd85ffa3d1b784f297
1055
1040
2016-01-06T12:41:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| cockroach.speciesfile.org_TaxonNameID = 1174409
}}
31e2f9ba9f80cecab956e4614e31af06318ad552
1063
1055
2016-01-06T13:47:34Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| cockroach.speciesfile.org_TaxonNameID = 1174409
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6328
}}
0ecd40d61658da91c7df8651e2f5960ae8818b94
Gromphadorhina oblongonota
0
175
1041
993
2016-01-06T12:27:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
}}
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID=1174411 Speciesfile.org -> Gromphadorhina oblongonota]
cedd28f862178e912f61c595bf8ea498c6979d55
1042
1041
2016-01-06T12:27:18Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorhina oblongonata]] nach [[Gromphadorhina oblongonota]] und überschrieb dabei eine Weiterleitung
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
}}
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID=1174411 Speciesfile.org -> Gromphadorhina oblongonota]
cedd28f862178e912f61c595bf8ea498c6979d55
1049
1042
2016-01-06T12:35:38Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
}}
b584bdc1e6980fddb7c6bbe12ca2997e39d02ea0
1064
1049
2016-01-06T13:51:23Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6332
}}
ac1e5893633d2a7b5c6b5a8f637c9abf363abe3b
Gromphadorhina oblongonata
0
261
1043
2016-01-06T12:27:18Z
Lollypop
2
Lollypop verschob Seite [[Gromphadorhina oblongonata]] nach [[Gromphadorhina oblongonota]] und überschrieb dabei eine Weiterleitung
wikitext
text/x-wiki
#WEITERLEITUNG [[Gromphadorhina oblongonota]]
e6697c042de0a72d387bf1c0fdf8b66f9d85fc3a
Gromphadorhina portentosa
0
145
1045
588
2016-01-06T12:32:59Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 12
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174413
}}
c1cb0d70ed2ed50a07526a3a1c1f9f0c26c62ed6
1065
1045
2016-01-06T13:52:15Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 12
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6329
| cockroach.speciesfile.org_TaxonNameID = 1174413
}}
0a80dff556377c9cd97c566f9742355ad2c1d2c2
Elliptorhina javanica
0
146
1052
999
2016-01-06T12:38:59Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174403
}}
faf376de7e23b3b894215548da14a61378e28657
1068
1052
2016-01-06T13:55:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6342
| cockroach.speciesfile.org_TaxonNameID = 1174403
}}
b079d198d067fc249f1b37987f077792aa21d0b4
Category:Elliptorhina
14
147
1054
1005
2016-01-06T12:40:51Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
f564d7ae48fcc9e64d8ee7ff839034f6b8c292ed
1066
1054
2016-01-06T13:53:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6334
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
f4d293557f9f8edff49115a93556494a38a66402
Category:Archimandrita
14
149
1056
414
2016-01-06T12:45:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1893
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Tribus =
| cockroach.speciesfile.org_TaxonNameID = 1174139
}}
ecffae7dac923589ec4867c94b3cb3d4260062cf
1061
1056
2016-01-06T13:46:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1893
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Tribus =
| cockroach.speciesfile.org_TaxonNameID = 1174139
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6664
}}
ebb6c7a70cef8cb6b4f7b2b6c8c0d8825192c3e2
Archimandrita tesselata
0
148
1057
590
2016-01-06T13:42:12Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata an einem Stück Gurke
| Autor = Rehn, 1903
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata ist eine große, meist recht scheue Schabenart.
Ihren Namen "Pfefferschabe" trägt sie aufgrund der Zeichnung auf ihren Flügeldecken.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|An Gurke
Image:Archimandrita_tesselata_R0015551.png|Können diese Augen lügen?
</gallery>
<gallery mode="slideshow" caption="Häutung eines Archimandrita tesselata Männchens">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
50ba4eccbc8168215311c36ad695539ccff11615
1060
1057
2016-01-06T13:45:22Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata an einem Stück Gurke
| Autor = Rehn, 1903
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata ist eine große, meist recht scheue Schabenart.
Ihren Namen "Pfefferschabe" trägt sie aufgrund der Zeichnung auf ihren Flügeldecken.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|An Gurke
Image:Archimandrita_tesselata_R0015551.png|Können diese Augen lügen?
</gallery>
<gallery mode="slideshow" caption="Häutung eines Archimandrita tesselata Männchens">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
67908133b75ba1f05debef6f693464f43bad9d6b
Category:Blaberus
14
193
1069
629
2016-01-06T13:58:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Serville, 1831
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaberus
| Untergattung =
| Tribus =
| cockroach.speciesfile.org_TaxonNameID = 1174154
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6590
}}
12e1f71b535061f8b5f7fe2071ead01a7df32f5e
Blaberus giganteus
0
192
1070
628
2016-01-06T14:00:17Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor = Linnaeus, 1758
| Untergattung =
| Gattung = Blaberus
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Art = giganteus
| Verbreitung = Mittelamerika und nördliches Südamerika
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174190
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6598
}}
bd6f8b5d47faeb007c0eb0f445f1afcf065591ea
Category:Blaptica
14
151
1071
417
2016-01-06T14:04:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Stål, 1874
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaptica
| Untergattung =
| Tribus =
| cockroach.speciesfile.org_TaxonNameID = 1174201
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6568
}}
3bf21dc8a04025c736904eff64578b64af69a058
Blaptica dubia
0
150
1072
569
2016-01-06T14:05:54Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| Autor = Serville, 1838
| Untergattung =
| Gattung = Blaptica
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Art = dubia
| Verbreitung = Argentinien, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174202
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6586
}}
b5a29b02ad48d0e152c17d5fc3e5f5d0c3a310f6
Category:Therea
14
172
1073
486
2016-01-06T14:38:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Billberg, 1820
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| cockroach.speciesfile.org_TaxonNameID = 1178142
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1257
}}
5200a22a8d6add9ff746ba350206809ddaa66b19
1097
1073
2016-01-06T15:45:13Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Billberg, 1820
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| cockroach.speciesfile.org_TaxonNameID = 1178142
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1257
}}
a6bbe09163359b64d72fd29cdf51c215dad39967
1116
1097
2016-01-06T16:33:59Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Billberg, 1820
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Corydioidea
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| cockroach.speciesfile.org_TaxonNameID = 1178142
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1257
}}
246c74b62208c5f67924532afdeb6f914f9fb8c1
Therea olegrandjeani
0
173
1074
571
2016-01-06T14:39:44Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor = Fritzsche & Zompro, 2008
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Tribus =
| Gattung = Therea
| Untergattung =
| Art = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178153
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1259
}}
0af48dc2aad033010f0fb033f4d3562c0f993019
Therea regularis
0
171
1075
573
2016-01-06T14:41:23Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor = Grandcolas, 1993
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| Untergattung =
| Art = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178147
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1267
}}
Kleine, quirlige Art.
bb09219202c7d716988a14003e71fb375b62ecc1
Category:Princisia
14
153
1076
419
2016-01-06T14:44:32Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| cockroach.speciesfile.org_TaxonNameID = 1174415
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6325
}}
e735ca75945b7e791bde0bbe17c4d3001e504a97
Princisia vanwaerebeki
0
152
1077
589
2016-01-06T14:45:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
a8c34ab80d7f9db2854d635abf425c563aec8713
Template:Systematik
10
117
1078
1062
2016-01-06T15:00:23Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</noinclude>
9bd9ca4610e1cf12fb801db24c5e2c39913e10ff
1079
1078
2016-01-06T15:03:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
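Note on the escaped pipes used throughout this template: inside a parser function such as <nowiki>{{#if:…}}</nowiki>, a bare <code>|</code> would be read as the parser-function argument separator, so the table syntax has to be written with the magic word <code><nowiki>{{!}}</nowiki></code> (a literal <code>|</code>) and the helper <code><nowiki>{{!-}}</nowiki></code> (assumed here to be a local template expanding to a row separator <code>|-</code>). A minimal sketch of the pattern this template uses:

<pre>
{{#if: {{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
</pre>

With <code>Familie=Blaberidae</code> this expands to an ordinary table row (<code>|-</code>, then two cells), and to nothing when the parameter is empty, which is why optional rows disappear cleanly from the Systematik box.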
</noinclude>
0deaa45a268f280788f7ef0207496a9f30a025ee
1085
1079
2016-01-06T15:11:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Category:{{{Familie|}}}|{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[{{{Unterfamilie|}}}]]
[[:Category:{{{Unterfamilie|}}}|{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[{{{Tribus|}}}]]
[[:Category:{{{Tribus|}}}|{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[{{{Gattung|}}}]]
[[:Category:{{{Gattung|}}}|{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[{{{Untergattung|}}}]]
[[:Category:{{{Untergattung|}}}|{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
6a9f7fc61a9df78e6a7fd15c3c23319bc384939f
1086
1085
2016-01-06T15:14:30Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
7d4daed0bec2ecff1498951520c34c27d51703bc
1087
1086
2016-01-06T15:15:42Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''[[{{{Art|}}}]]''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
808fee23dabe8aec25da17dc30189ff9b6ab063b
1088
1087
2016-01-06T15:22:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
89fb8af6725b369dd35f08d0651d85028a381533
1089
1088
2016-01-06T15:29:04Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Gattung|}}}|{{#if:{{{Art|}}}||
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
dfe49374586800f69bc20cb75a3a06d2ec5798e0
1090
1089
2016-01-06T15:31:53Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Gattung|}}}
|
|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
1744c36440cf199436d95edc5ae677905498da26
1091
1090
2016-01-06T15:33:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Gattung|}}}||
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
47181bfc84fca18c5956416baba034907d687301
1092
1091
2016-01-06T15:36:36Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}||
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
* {{PAGENAME}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
8da945719079369a343ddd98144ff80c9d6f5dc3
1093
1092
2016-01-06T15:37:07Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#if:{{{Art|}}}||
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
* {{FULLPAGENAME}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
cdc281846872b5e1feda8bfa057d1611769904d0
1094
1093
2016-01-06T15:40:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Art|}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
{{#if:{{{Untergattung|}}}|
[[Kategorie:{{{Untergattung|}}}{{!}}{{{Art|}}}]]
}}
}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
b592559c20e4a815f886e3dcecee74d3cfd5b4e6
1102
1094
2016-01-06T15:51:50Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
8b885b9e2fca6f83d59e8d127616af1268e8686e
1103
1102
2016-01-06T15:58:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:Schaben]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=2}}
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
df9c00a7951b1b71bf5ac65df49d16c991dd01b9
1105
1103
2016-01-06T16:06:55Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:Schaben]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=2}}
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
76008e70851883cb56eb016a8e6a40f8082dc716
1106
1105
2016-01-06T16:09:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{{Gattung|}}} {{{Art|}}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:Schaben]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
956fa9e3437eb8da434696d07775077740e6c476
1110
1106
2016-01-06T16:21:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:Schaben]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| WissName = Princisia vanwaerebeki
| Autor = van Herrewege, 1973
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
3393588945ef217c1d3dd8a648524fba861d977f
1112
1110
2016-01-06T16:28:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Familie|}}}|
[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
2997e39f5ea45912013db1a04466ec14be6866dd
1118
1112
2016-01-06T16:35:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{{Gattung|}}} {{{Art|}}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
7bb7c87d4904a54ae164c211e16d89ba193e6a92
1120
1118
2016-01-06T16:40:27Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
af92e62c2b5cdf89bc2e97c78aaf2a6ee8012fef
1121
1120
2016-01-06T16:41:50Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
3fba4fa2b16c517508cad950185a1d59ae011601
Category:Gromphadorhina
14
183
1080
1063
2016-01-06T15:03:59Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| cockroach.speciesfile.org_TaxonNameID = 1174409
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6328
}}
7306d1ff6fae0bcc5505904f37f5b6a9b4dd4852
Category:Archimandrita
14
149
1081
1061
2016-01-06T15:04:27Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1893
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| cockroach.speciesfile.org_TaxonNameID = 1174139
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6664
}}
7dc82eb97b4d5b68358e0112954ca89585cf0eee
Category:Blaberus
14
193
1082
1069
2016-01-06T15:04:46Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Serville, 1831
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaberus
| cockroach.speciesfile.org_TaxonNameID = 1174154
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6590
}}
22995d97541a376808469e85d12e11c87e554774
Category:Blaptica
14
151
1083
1071
2016-01-06T15:05:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Schaben]]
{{Systematik
| Autor = Stål, 1874
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaptica
| cockroach.speciesfile.org_TaxonNameID = 1174201
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6568
}}
90828dd04b207f426241b29568e8f58394e5e8bf
Category:Elliptorhina
14
147
1084
1066
2016-01-06T15:05:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6334
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
c9f74f7e6d31c09b2fd1221b9f2a0121f9cf19fa
1098
1084
2016-01-06T15:45:50Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6334
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
297662a7ba358719f9a29511b6d0b9bd5ac6a745
Category:Corydiinae
14
262
1095
2016-01-06T15:43:39Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Schaben]] {{Systematik | Autor = | Bild = | Bildbeschreibung = | Familie = Corydiidae | Unterfamilie = …“
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| cockroach.speciesfile.org_TaxonNameID =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:
}}
1cac2455f9bb456899302fb2a00fd699449f267d
1096
1095
2016-01-06T15:44:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| cockroach.speciesfile.org_TaxonNameID = 1177956
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1256
}}
e408842f83f85ece14e4dd303e18eb0ead9b1b5e
1107
1096
2016-01-06T16:11:41Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| cockroach.speciesfile.org_TaxonNameID = 1177956
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1256
}}
919ac09fb00c238fddffd39e350eb4981baf4887
1117
1107
2016-01-06T16:34:13Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Corydioidea
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| cockroach.speciesfile.org_TaxonNameID = 1177956
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1256
}}
83d8b3e2e033e9329620392cf10acc0f2e00c9ac
Category:Oxyhaloinae
14
263
1099
2016-01-06T15:47:50Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | Familie = Blaberidae | Unterfamilie = Oxyhaloinae | LSID …“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6192
| cockroach.speciesfile.org_TaxonNameID = 1174364
}}
bc8b8a9f7909c4b09a9b48d9d6e3517ad2e8a9d7
Category:Blaberidae
14
264
1100
2016-01-06T15:48:57Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = Saussure, 1864 | Bild = | Bildbeschreibung = | Familie = Blaberidae | LSID = urn:lsid…“
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6191
| cockroach.speciesfile.org_TaxonNameID = 1172575
}}
ba7cb3a78a5a5a11191205f24c42803a67f9bf66
1101
1100
2016-01-06T15:49:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Familie = Blaberidae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6191
| cockroach.speciesfile.org_TaxonNameID = 1172575
}}
c60efbea172838f0dc820ebfa866aab687bafc54
1111
1101
2016-01-06T16:25:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6191
| cockroach.speciesfile.org_TaxonNameID = 1172575
}}
c8316d351edd615ae31941d2c6710199149e7204
Category:Schaben
14
142
1104
547
2016-01-06T15:59:15Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Insekten]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=4}}
ff4a5541a43fe42e32199208194dbbfc240c2b94
Category:Corydiidae
14
265
1108
2016-01-06T16:19:43Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = Saussure, 1864 | Bild = | Bildbeschreibung = | Familie = Corydiidae | cockroach.speciesfile.org_Ta…“
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| cockroach.speciesfile.org_TaxonNameID =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:
}}
364c69ed91b365a5539b10a2a5f4abdd83d0d828
1109
1108
2016-01-06T16:20:16Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Familie = Corydiidae
| cockroach.speciesfile.org_TaxonNameID = 1177778
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1253
}}
f817af4166fa8d8f29bc938e5fdd4b2534bce4da
1113
1109
2016-01-06T16:29:48Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Corydioidea
| Familie = Corydiidae
| cockroach.speciesfile.org_TaxonNameID = 1177778
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1253
}}
8b4cd661efbba7f1e91acce0ea6553b529f53793
Category:Corydioidea
14
266
1114
2016-01-06T16:31:16Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = Saussure, 1864 | Bild = | Bildbeschreibung = | Ordnung = Blattodea | Superfamilie = Corydioid…“
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Corydioidea
| cockroach.speciesfile.org_TaxonNameID = 1177728
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1252
}}
4b67ebe4fdb86ba61a91826041b73ffabca8d90d
Category:Blattodea
14
267
1115
2016-01-06T16:32:36Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | Ordnung = Blattodea | cockroach.speciesfile.org_TaxonNameID = 117…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| cockroach.speciesfile.org_TaxonNameID = 1172573
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1
}}
2614d4ab33c2ac42133bb7a4a7b4cf9e2fb3c124
Category:Blaberoidea
14
268
1119
2016-01-06T16:38:25Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = Saussure, 1864 | Bild = | Bildbeschreibung = | Ordnung = Blattodea | Superfamilie = Blaberoid…“
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| cockroach.speciesfile.org_TaxonNameID = 1172574
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1848
}}
319689d82bdc4b899a7dac0e435cd77b573bb88e
Category:Blaberidae
14
264
1122
1111
2016-01-06T16:43:36Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6191
| cockroach.speciesfile.org_TaxonNameID = 1172575
}}
7fc22807ffad10c91c315105c2212a306dcc3874
Category:Oxyhaloinae
14
263
1123
1099
2016-01-06T16:43:52Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6192
| cockroach.speciesfile.org_TaxonNameID = 1174364
}}
c4dab6ed94189e5791937232154311beb64c22c7
Category:Elliptorhina
14
147
1124
1098
2016-01-06T16:44:05Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6334
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
6aa0cbb76e0a335796701c5d141a1ebda81c9735
Elliptorhina javanica
0
146
1125
1068
2016-01-06T16:44:17Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica an einem Champignon
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6342
| cockroach.speciesfile.org_TaxonNameID = 1174403
}}
07013894f62890711c6121f7da331e4dd91eb74d
Category:Gromphadorhina
14
183
1127
1080
2016-01-06T16:45:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| cockroach.speciesfile.org_TaxonNameID = 1174409
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6328
}}
e6c6fa5d8a7ef93356f2466dad2a7cf732fd2135
1140
1127
2016-01-06T16:54:34Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| cockroach.speciesfile.org_TaxonNameID = 1174409
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6328
}}
82e3df26ccc3ee85dc5b842809c18e805bab1fe1
Gromphadorhina oblongonota
0
175
1128
1064
2016-01-06T16:45:24Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6332
}}
7d5f69bec9aa2cdfc66095cc7933ced654cd1cb7
Gromphadorhina portentosa
0
145
1129
1065
2016-01-06T16:45:41Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 12
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6329
| cockroach.speciesfile.org_TaxonNameID = 1174413
}}
48b4066ddda0efac33e087b9c6e2258347318a61
Gromphadorhina spec.
0
144
1130
562
2016-01-06T16:46:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina spec.
| Autor =
| Untergattung =
| Gattung = Gromphadorhina
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Art = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
64d79c7d84274fb80a7ecdb9778cfc666c949872
1131
1130
2016-01-06T16:46:42Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
df30431f17f4694b8270c8230714344828947890
Category:Princisia
14
153
1132
1076
2016-01-06T16:47:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| cockroach.speciesfile.org_TaxonNameID = 1174415
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6325
}}
3d51eb72b13300f0abdfc299807b3f09ff82fc99
1141
1132
2016-01-06T16:55:00Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| cockroach.speciesfile.org_TaxonNameID = 1174415
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6325
}}
20e8c35849e140477ff9b942807d279fdc01a884
Princisia vanwaerebeki
0
152
1133
1077
2016-01-06T16:48:25Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
55b12fd2d6f2c4254ac470faac484653408a0bda
Category:Blaptica
14
151
1134
1083
2016-01-06T16:49:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Stål, 1874
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaptica
| cockroach.speciesfile.org_TaxonNameID = 1174201
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6568
}}
32d9dedf4ba0d892b81d6fe998cc098379809fb3
1143
1134
2016-01-06T16:55:31Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Stål, 1874
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaptica
| cockroach.speciesfile.org_TaxonNameID = 1174201
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6568
}}
c462854b875c27945c2a453192087b18b6d4db07
Category:Blaberinae
14
269
1135
2016-01-06T16:51:07Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = Saussure, 1864 | Bild = | Bildbeschreibung = | Ordnung = Blattodea | Superfamilie = Blaberoid…“
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| cockroach.speciesfile.org_TaxonNameID = 1174138
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6392
}}
16ed3a8ff45d9174a5c8064f428669cbda7987e8
Category:Archimandrita
14
149
1136
1081
2016-01-06T16:51:41Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Saussure, 1893
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| cockroach.speciesfile.org_TaxonNameID = 1174139
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6664
}}
b53e63b999c5db0a82e97cec5512180c0ad28fc9
1139
1136
2016-01-06T16:54:15Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1893
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| cockroach.speciesfile.org_TaxonNameID = 1174139
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6664
}}
3deecebd65b60d3a2183defd34bcb28360380e51
Category:Blaberus
14
193
1137
1082
2016-01-06T16:51:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Schaben]]
{{Systematik
| Autor = Serville, 1831
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaberus
| cockroach.speciesfile.org_TaxonNameID = 1174154
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6590
}}
191ee5cd4c224496c3fec6de8f05e7936246464e
1142
1137
2016-01-06T16:55:15Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Serville, 1831
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Blaberus
| cockroach.speciesfile.org_TaxonNameID = 1174154
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6590
}}
33a8c4adb8994039d2fab72455cee696cdca254c
Blaberus giganteus
0
192
1138
1070
2016-01-06T16:52:09Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor = Linnaeus, 1758
| Untergattung =
| Gattung = Blaberus
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Art = giganteus
| Verbreitung = Mittelamerika und nördliches Südamerika
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174190
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6598
}}
a95fe5a01cb8118fc77ed971c7b8685735856794
1145
1138
2016-01-10T15:25:13Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor = Linnaeus, 1758
| Bild = Blaberus_giganteus.jpg
| Bildbeschreibung = Adult Blaberus giganteus
| Untergattung =
| Gattung = Blaberus
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Art = giganteus
| Verbreitung = Mittelamerika und nördliches Südamerika
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174190
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6598
}}
3ff027dbe911b42aa3a8edd9250cfb5a357a5143
1147
1145
2016-01-10T15:29:07Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor = Linnaeus, 1758
| Bild = Blaberus_giganteus.jpg
| Bildbeschreibung = Adult Blaberus giganteus
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Gattung = Blaberus
| Untergattung =
| Art = giganteus
| Verbreitung = Mittelamerika und nördliches Südamerika
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174190
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6598
}}
94456000d8bf2179c093e6c812a975f598e4c623
File:Blaberus giganteus.jpg
6
270
1144
2016-01-10T15:23:19Z
Lollypop
2
Lars Timmann
wikitext
text/x-wiki
Lars Timmann
46d235dbc7e486e0c3caa747e6ec61caa3343562
1146
1144
2016-01-10T15:26:17Z
Lollypop
2
Lollypop lud eine neue Version von „[[Datei:Blaberus giganteus.jpg]]“ hoch: Lars Timmann
wikitext
text/x-wiki
Lars Timmann
46d235dbc7e486e0c3caa747e6ec61caa3343562
Blaptica dubia
0
150
1148
1072
2016-01-10T15:29:24Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| Autor = Serville, 1838
| Untergattung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Gattung = Blaptica
| Art = dubia
| Verbreitung = Argentinien, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174202
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6586
}}
070092076068e1a614c08aa8f8a16f2c7b071b66
Elliptorhina laevigata
0
271
1149
2016-01-10T15:33:44Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Fauchschabe | Autor = (Saussure & Zehntner, 1895) | Bild = | Bildbeschreibung = | Ordnung …“
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = (Saussure & Zehntner, 1895)
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Elliptorhina
| Untergattung =
| Art = laevigata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6339
| cockroach.speciesfile.org_TaxonNameID = 1174404
}}
ebc02c658923197f76844912399dc5b448602b36
1150
1149
2016-01-10T15:34:25Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = Saussure & Zehntner, 1895
| Bild =
| Bildbeschreibung =
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Elliptorhina
| Untergattung =
| Art = laevigata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6339
| cockroach.speciesfile.org_TaxonNameID = 1174404
}}
04e2894694dd7d930a4abcff2203629a29f4315d
MySQL Tipps und Tricks
0
197
1152
889
2016-01-11T13:19:36Z
Lollypop
2
/* All grants */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
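The same one-liner is handy for backing up all grants before a migration (a sketch; the output file name is arbitrary, and the trailing <code>sed</code> adds the semicolons that SHOW GRANTS omits so the file can be replayed):
<source lang=bash>
# Dump every account's grants into a re-playable SQL file
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' \
    | xargs -n 1 -i mysql --skip-column-names --batch --execute 'show grants for {}' \
    | sed 's/$/;/' > /root/all-grants.sql
</source>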
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
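A minimal sketch of making the slow query log survive restarts via a config fragment (the file layout follows the conf.d scheme from the Structured configuration section; the values are examples):
<source lang=mysql>
[mysqld]
# Persistent equivalents of the SET GLOBAL statements below
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 1
</source>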
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely, regardless of any other destinations given
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
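Which destination is currently active can be checked at any time (the value shown here is just an example):
<source lang=mysql>
mysql> SHOW GLOBAL VARIABLES LIKE 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
</source>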
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data changes; the ext3/ext4 default mode)
* data=journal (worst performance, but best data protection; journals metadata and all data)
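Put together, a matching /etc/fstab entry might look like this (a sketch; the mount point is an assumption, the device name is taken from the mkfs example above):
<source lang=bash>
# <device>                                   <mountpoint>    <fstype> <options>               <dump> <pass>
/dev/mapper/vg--data-lv--ext4--mysql_data    /var/lib/mysql  ext4     noatime,data=writeback  0      2
</source>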
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
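The same conversion from the fdisk byte count to the whole-gigabyte figure used in innodb_data_file_path can be done with plain shell arithmetic:
<source lang=bash>
# Convert the byte count reported by fdisk to whole GiB
bytes=26843545600
echo "$(( bytes / 1024 / 1024 / 1024 ))G"
</source>
This prints 25G, the size that goes into the 25Gnewraw specification below.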
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=bash>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=bash>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
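The innodb_buffer_pool_size value above follows from the 0.7*total_mem_size rule in the first comment; for an assumed host with 2 GiB of RAM the arithmetic can be sketched in shell:
<source lang=bash>
# 70 % of total RAM in MB, for innodb_buffer_pool_size
total_kb=2097152   # example value; on a live system: awk '/MemTotal/{print $2}' /proc/meminfo
echo "$(( total_kb * 7 / 10 / 1024 ))M"
</source>
This yields 1433M, the value used in the sample configuration.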
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
0b35b27efbce35c54a1a4dc96c6df6b740133ac8
MySQL Symmetric Encryption
0
272
1153
2016-01-11T14:35:00Z
Lollypop
2
Die Seite wurde neu angelegt: „ <source lang=mysql> > select hex(aes_encrypt(rpad("abcqweqweqweqwe",31,"~"),"mykey")) as encrypted; +---------------------------------------------------------…“
wikitext
text/x-wiki
<source lang=mysql>
> select hex(aes_encrypt(rpad("abcqweqweqweqwe",31,"~"),"mykey")) as encrypted;
+------------------------------------------------------------------+
| encrypted |
+------------------------------------------------------------------+
| E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843 |
+------------------------------------------------------------------+
</source>
<source lang=mysql>
> select trim(trailing "~" from aes_decrypt(unhex("E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843"),"mykey")) as decrypted;
+-----------------+
| decrypted |
+-----------------+
| abcqweqweqweqwe |
+-----------------+
</source>
658ed7e9b6a84dde2271057b42c2600923de1c2a
Nice Options
0
253
1154
969
2016-01-14T08:29:36Z
Lollypop
2
wikitext
text/x-wiki
Linux:
<source lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
pwgen -nancy 17
</source>
Solaris:
<source lang=bash>
</source>
f1f56276c5eab80c7beb14463539a5db9cad6337
Archimandrita tesselata
0
148
1155
1060
2016-01-19T17:11:24Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor = Rehn, 1903
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata is a large and usually quite shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On a piece of cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Moulting of an Archimandrita tesselata male">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
e1a3e3d5f8ebee4e0e84ffdc1940115f91790c8d
1156
1155
2016-01-19T17:12:33Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor = Rehn, 1903
| Ordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata is a large and usually quite shy cockroach species.
It owes its German name "Pfefferschabe" (pepper roach) to the pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On a piece of cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Moulting of an Archimandrita tesselata male">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
740358cebf9aaf104e321432d15d971b7de61f32
Solaris SMF
0
100
1157
728
2016-01-20T14:55:27Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
__FORCETOC__
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
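Assuming the manifest above is saved as /var/svc/manifest/site/foreground-daemon.xml (the path is an example), it can be validated, imported, and the instance enabled like this:
<source lang=bash>
# svccfg validate /var/svc/manifest/site/foreground-daemon.xml
# svccfg import /var/svc/manifest/site/foreground-daemon.xml
# svcadm enable svc:/network/foreground-daemon:default
# svcs -l svc:/network/foreground-daemon:default
</source>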
==Adding dependency on another service==
For example mount NFS after ZFS:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
svcadm refresh svc:/network/nfs/client
</source>
==Setting multiple parameters to environment variables==
The goal: raise -Xmx in CATALINA_OPTS from 512m to 2G.
The problem: setting the variable directly with setenv fails:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to replace the complete environment instead:
* Get the complete environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</source>
* Set the complete (modified) environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
* Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
== Ignore child process coredumps ==
<source lang=xml>
<property_group name='startd' type='framework'>
<!-- sub-process core dumps shouldn't restart
session -->
<propval name='ignore_error' type='astring'
value='core,signal' />
</property_group>
</source>
b7b9a8c913ec3d8e22dcc56f46f446314fb666d4
1161
1157
2016-01-21T15:11:00Z
Lollypop
2
/* Ignore child process coredumps */
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
__FORCETOC__
== Running foreground processes ==
<source lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</source>
==Adding dependency on another service==
For example, mount NFS filesystems only after the local (ZFS) filesystems are available:
<source lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
svcadm refresh svc:/network/nfs/client
</source>
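The same dependency can also be declared in the service manifest; a sketch mirroring the svccfg properties above (the element belongs inside the client's service definition, like the manifest example at the top of this page):
<source lang=xml>
<dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local:default'/>
</dependency>
</source>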
==Setting multiple parameters to environment variables==
The goal: raise -Xmx in CATALINA_OPTS from 512m to 2G.
The problem: setting the variable directly with setenv fails:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</source>
So you have to replace the complete environment instead:
* Get the complete environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</source>
* Set the complete (modified) environment:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</source>
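Editing the long value by hand is error-prone; a small sketch (plain shell and sed, nothing SMF-specific, variable name hypothetical) that rewrites just the -Xmx token before pasting the value back into the setprop command:
<source lang=bash>
# Hypothetical helper: rewrite the -Xmx token inside a saved copy of the
# CATALINA_OPTS fragment, then paste the result into setprop.
env='"CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m"'
printf '%s\n' "$env" | sed 's/-Xmx[0-9]*[mMgG]/-Xmx2G/'
# prints: "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G"
</source>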
* Check it with:
<source lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</source>
== Ignore child process coredumps ==
<source lang=xml>
<property_group name='startd' type='framework'>
<!-- sub-process core dumps shouldn't restart
session -->
<propval name='ignore_error' type='astring'
value='core,signal' />
</property_group>
</source>
<source lang=bash>
# svccfg -s clamav
svc:/network/clamav> addpg startd framework
svc:/network/clamav> addpropvalue startd/ignore_error astring: core,signal
svc:/network/clamav> end
</source>
d2e8883c973e31368ba2f19b1446c291ba408196
Solaris 11 Networking
0
96
1158
917
2016-01-21T12:24:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To stop the automatic network configuration from reverting your changes, enable the manual (fixed) network configuration profile:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP and verify connectivity.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm link.
To change the dladm link's MTU, the ipadm interface has to be disabled first (DOWNTIME! BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
= Aggregate for iSCSI =
This is crude, but it worked with our Cisco switches:
<source lang=bash>
# echo dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
= Set TCP parameters in immutable zones =
In normal immutable mode even zlogin -U cannot change the property:
<source lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</source>
You need to reboot the zone with a writable root first:
<source lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</source>
31ce0cac8560153b20698d7bff7184e41e0d1129
1159
1158
2016-01-21T12:25:22Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To stop the automatic network configuration from reverting your changes, enable the manual (fixed) network configuration profile:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP and verify connectivity.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm link.
To change the dladm link's MTU, the ipadm interface has to be disabled first (DOWNTIME! BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
= Aggregate for iSCSI =
This is crude, but it worked with our Cisco switches:
<source lang=bash>
# echo dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
= Set TCP parameters in immutable zones =
In normal immutable mode even zlogin -U cannot change the property:
<source lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</source>
You need to reboot the zone with a writable root first:
<source lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</source>
9b452119eb48c36a57621b2ae29903f840d3ebab
1160
1159
2016-01-21T12:25:52Z
Lollypop
2
/* Set TCP parameters in immutable zones */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To stop the automatic network configuration from reverting your changes, enable the manual (fixed) network configuration profile:
<pre>
# netadm enable -p ncp defaultfixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP and verify connectivity.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
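The perl one-liner edits the file in place; you can dry-run the substitution on a scratch copy first (the scratch file name is only an example):
<source lang=bash>
# Dry run of the in-place substitution on a scratch copy.
printf 'hosts: files\n' > /tmp/nsswitch.test
perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /tmp/nsswitch.test
cat /tmp/nsswitch.test
# prints: hosts: files dns
</source>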
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm link.
To change the dladm link's MTU, the ipadm interface has to be disabled first (DOWNTIME! BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
= Aggregate for iSCSI =
This is crude, but it worked with our Cisco switches:
<source lang=bash>
# echo dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
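The trick above relies on shell brace expansion: the quoted prefix keeps each "-l iscsiN" pair as one word with an embedded space, which is why the generated line has to be re-parsed by /bin/sh. A quick demonstration:
<source lang=bash>
# Brace expansion builds the repeated -l arguments (bash):
echo "-l iscsi"{0..3}
# prints: -l iscsi0 -l iscsi1 -l iscsi2 -l iscsi3
</source>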
= Set TCP parameters in immutable zones =
In normal immutable mode even zlogin -U cannot change the property:
<source lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</source>
You need to reboot the zone with a writable root first:
<source lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</source>
a7f303278b2777491c6569247de04b3b936a344e
Linux Tipps und Tricks
0
273
1162
2016-01-22T13:07:30Z
Lollypop
2
Die Seite wurde neu angelegt: „==Hard reboot== This is the hard way to kick your kernel into void. No filesystem sync is done, just an ugly fast direkt reboot! <source lang=bash> # echo 1 > …“
wikitext
text/x-wiki
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second one sends the reboot request.
For more details have a look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
bf36217fca1cd056ab10d4c27c3231fe5440910e
1163
1162
2016-01-22T13:08:19Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second one sends the reboot request.
For more details have a look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
5fa74e3c703fb0ff83d851f0d73ed697ab0d627a
1164
1163
2016-01-22T13:08:40Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second one sends the reboot request.
For more details have a look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
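You can check whether sysrq is already enabled without touching anything; the value is a bitmask, where 1 allows all functions and 0 disables them:
<source lang=bash>
# Harmless read-only check of the current sysrq setting.
cat /proc/sys/kernel/sysrq
</source>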
286ddb7712795e08e4f286a9ee6aacc8298fe5c8
Linux udev
0
88
1165
822
2016-01-22T13:09:42Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|LVM]]
[[Kategorie:Linux|Udev]]
Example custom rule in /etc/udev/rules.d/99-custom.rules:
<source lang=bash>
ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="VirtualBox-$env{DM_NAME}"
</source>
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql is unknown... maybe I should install MySQL first ;-).
After installing it:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
b834e3bf60c495d49e61262526dcafe2366d422a
1166
1165
2016-01-22T13:10:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|LVM|udev]]
Example custom rule in /etc/udev/rules.d/99-custom.rules:
<source lang=bash>
ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="VirtualBox-$env{DM_NAME}"
</source>
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</source>
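The ENV matches use shell-style glob patterns, so lv-rawdisk-innodb* covers every numbered raw disk. A quick demonstration of the pattern semantics in plain shell:
<source lang=bash>
# The same glob that the udev rule uses, checked with a shell case statement.
case lv-rawdisk-innodb01 in
  lv-rawdisk-innodb*) echo match ;;
  *) echo no-match ;;
esac
# prints: match
</source>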
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql is unknown... maybe I should install MySQL first ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
dfd3cfda1a350f74bdd1733e59a51e0daee5d4f0
1167
1166
2016-01-22T13:10:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
/etc/udev/rules.d/99-custom.rules
ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="VirtualBox-$env{DM_NAME}"
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data", ENV{DM_LV_NAME}=="lv-rawdisk-innodb*", OWNER="mysql", GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql is unknown... maybe I should install MySQL first ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
45c9205d62038c5dbcfd1f4b395b8c04279e94e9
1168
1167
2016-01-22T13:12:14Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
==udev for MySQL on LVM with InnoDB on raw devices==
/etc/udev/rules.d/99-custom.rules
ENV{DM_VG_NAME}=="VolumeGroup1", ENV{DM_LV_NAME}=="LogicalVolume1", MODE="0660", OWNER="lollypop", GROUP="disk", SYMLINK+="VirtualBox-$env{DM_NAME}"
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data", ENV{DM_LV_NAME}=="lv-rawdisk-innodb*", OWNER="mysql", GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql is unknown... maybe I should install MySQL first ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
b441106fcf7909f2c2c6dfa1319d739c46d76e58
1169
1168
2016-01-22T13:23:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data", ENV{DM_LV_NAME}=="lv-rawdisk-innodb*", OWNER="mysql", GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql is unknown... maybe I should install MySQL first ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
==VirtualBox on ZVols==
This assigns all ZVols under rpool/VM to the user <i>lollypop</i>:
* /etc/udev/rules.d/99-local-zvol.rules
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="lollypop"
</source>
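If other accounts also need access to the ZVols, a group and an explicit mode can be added to the rule; the group name below is an illustrative assumption and must exist on the system:
<source lang=bash>
# Variant of 99-local-zvol.rules granting group read/write;
# the group "vboxusers" is hypothetical, substitute your own
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="lollypop", GROUP="vboxusers", MODE="0660"
</source>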
0e7e0c36eff55866c9dbc530efa9b445e0e84eb7
1170
1169
2016-01-22T13:24:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data", ENV{DM_LV_NAME}=="lv-rawdisk-innodb*", OWNER="mysql", GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK, user mysql is unknown... maybe I should install MySQL first ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
836029582d626444f04fe9947b508785ea586e17
Apache
0
205
1171
966
2016-01-28T09:37:45Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the defaults sensibly===
Adjust Country & Co to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, the passphrase can be stripped from the key afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
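Once the passphrase is stripped, file permissions are the only protection left, so it is worth tightening them; a minimal sketch using an illustrative stand-in path:
<source lang=bash>
# Restrict the unencrypted key to its owner; anyone who can read
# the file effectively owns the private key
touch /tmp/demo.ec-key          # stand-in for server.de.ec-key
chmod 600 /tmp/demo.ec-key
ls -l /tmp/demo.ec-key          # mode should now be -rw-------
</source>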
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
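The whole key/certificate round trip can be rehearsed with throwaway files before touching the real server material; file names and the CN below are illustrative:
<source lang=bash>
# Generate a throwaway EC key, self-sign it, then inspect the subject
openssl ecparam -genkey -name prime256v1 -out /tmp/demo.ec-key
openssl req -new -x509 -sha256 -key /tmp/demo.ec-key \
  -subj '/CN=*.server.de' -days 30 -out /tmp/demo.pem
openssl x509 -noout -subject -in /tmp/demo.pem   # subject contains CN=*.server.de
</source>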
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://lars\.timmann\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
</VirtualHost>
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
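The pipeline above prefixes every log file with <code>-f</code> before handing the whole list to a single apachetop invocation. The argument construction can be checked on its own with echo; the file names below are illustrative:
<source lang=bash>
# Each input line becomes "-f <file>", then one final command line is built
printf '%s\n' access.log error.log | xargs -n 1 echo -f | xargs echo apachetop
# apachetop -f access.log -f error.log
</source>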
09869907889fe4f5b3c339f60bce29c77defb836
1172
1171
2016-01-28T09:49:20Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the defaults sensibly===
Adjust Country & Co to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, the passphrase can be stripped from the key afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
</VirtualHost>
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
3016ae8b7b09d0b2bcd7c3aaa428fd2831d1527c
1173
1172
2016-01-28T09:51:53Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the defaults sensibly===
Adjust Country & Co to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, the passphrase can be stripped from the key afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
3908248a7568106662be5287e6f97d99e0dc9392
Apache
0
205
1174
1173
2016-01-28T09:52:34Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the defaults sensibly===
Adjust Country & Co to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, the passphrase can be stripped from the key afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
877620aa6fb051486c4d0279eb163b2f79979f8c
ProblemsWithSecurity
0
241
1175
937
2016-02-02T19:37:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Security]]
'''Avoiding security is not an option! But sometimes it helps when you have no way to administer your devices without cheating...'''
=Firefox=
Do '''not''' go to the URL ''about:config'', do '''not''' navigate to the section ''security.ssl3'', and do '''not''' double-click ''security.ssl3.dhe_aes_{128,256}_sha'' to set it to false.
[[Datei:Firefox_about-config_ssl.png]]
=Chrome=
==NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN==
When a site has changed its certificate and max-age has not yet expired, you can clear the pin for this site at chrome://net-internals/#hsts:
enter the affected domain under <i>Delete domain</i>
and press Delete.
eeeb6183965d65bd3a4ba9b102c87ab29d5a5a15
Solaris OracleClusterware
0
274
1176
2016-02-08T08:49:35Z
Lollypop
2
Page created: „ ==Set swap to physical RAM== <source lang=bash> # export RAM=256G # swap -d /dev/zvol/dsk/rpool/swap # zfs destroy rpool/swap # zfs create \ -V ${RAM} \ …“
wikitext
text/x-wiki
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
ed722f07314ba787b635e0e6289f962a36bdc12d
1177
1176
2016-02-08T09:29:02Z
Lollypop
2
wikitext
text/x-wiki
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
</source>
6bea1c78fe0d8e727e7dfb67a80c61e234e4d73c
1178
1177
2016-02-08T09:40:37Z
Lollypop
2
/* Set slew always for ntp */
wikitext
text/x-wiki
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
440b7ac01c6161964875b7eadb66d454cd67f613
1179
1178
2016-02-08T09:53:20Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
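The nawk one-liner keys on the Build Release and Branch lines of the `pkg info` output: the last dot-separated field of Build Release gives the major version, fields 3 and 4 of Branch give sub-release and update. Its field arithmetic can be checked offline with fabricated input; the version numbers below are illustrative, and portable awk is used here:
<source lang=bash>
# Feed a fake two-line pkg info excerpt through the same program
printf 'Build Release: 5.11\n        Branch: 0.175.3.1.0.5.0\n' | \
  awk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
# Solaris 11.3 Update 1
</source>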
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
7247b49febe295992dab60e41e1764959e07a74d
1180
1179
2016-02-08T09:55:36Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
e9f7cda0261c0febacbe803c2924baf20a2bf033
1181
1180
2016-02-08T10:07:58Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
7cc20e5aca4a07695b5cdabc095eb83264b1b6ea
1182
1181
2016-02-08T10:26:49Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Project==
<source lang=bash>
# groupadd -g 186 oinstall
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
26b8ad611c9554f87b8e0d69f546d9f7e11aff61
1183
1182
2016-02-08T10:34:23Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
ce2ffb531071059bd2d5c95f1fe498b5263012a4
1184
1183
2016-02-08T10:42:55Z
Lollypop
2
/* Groups */
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
04659f70f086bd2f161884838b0dc7da77bafd93
1185
1184
2016-02-08T10:53:55Z
Lollypop
2
/* Projects */
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
b6ea279d8395e889a38bb5e1ab7812c587555581
1186
1185
2016-02-08T10:58:27Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
36a795005084616a77640dc4f254e5f188030542
1187
1186
2016-02-08T11:14:51Z
Lollypop
2
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=ASM=
==Discover LUNs==
<source lang=bash>
# luxadm -e dump_map /devices/pci@0,0/pci8086,2f04@2/pci103c,197f@0,1/fp@0,0:devctl | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /Vendor:/{vendor=$NF;}/Serial Num:/{serial=$NF;}/Unformatted capacity:/{capacity=$(NF-1)""$NF;}disk && /^$/{printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity}' | sort -u
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
30c060666c2ba441804d5b6cd0cd4defd4253dfe
1188
1187
2016-02-08T11:25:48Z
Lollypop
2
/* ASM */
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=ASM=
==Discover LUNs==
<source lang=bash>
# luxadm -e dump_map /devices/pci@0,0/pci8086,2f04@2/pci103c,197f@0,1/fp@0,0:devctl | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /Vendor:/{vendor=$NF;}/Serial Num:/{serial=$NF;}/Unformatted capacity:/{capacity=$(NF-1)""$NF;}disk && /^$/{printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity}' | sort -u
</source>
==Label Disks==
<source lang=bash>
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | format -e /dev/rdsk/<disk>
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
d563edf73d12e78a367294db9ea90092c255d143
1189
1188
2016-02-08T12:42:57Z
Lollypop
2
/* ASM */
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
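The awk program can be exercised on canned input. The two sample lines below are illustrative (they follow the usual pkg info kernel layout, but the values are made up, not from a real system):
<source lang=bash>
# Feed two hypothetical pkg info lines through the same awk program
# to document the parsing: Build Release 5.11 -> Solaris 11,
# Branch 0.175.3.1.0.5.0 -> subrelease 3, update 1.
out=$(printf '%s\n' \
  '          Build Release: 5.11' \
  '                 Branch: 0.175.3.1.0.5.0' |
  awk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}')
echo "${out}"   # Solaris 11.3 Update 1
</source>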
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
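The project.max-shm-memory value (274877906944) is simply the box's 256 GiB of RAM expressed in bytes; a quick sanity check:
<source lang=bash>
# 256 GiB in bytes: 256 * 1024^3 (matches the projadd value above)
ram_gib=256
shm_bytes=$((ram_gib * 1024 * 1024 * 1024))
echo "${shm_bytes}"   # 274877906944
</source>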
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
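The brace expansions above resolve to /opt/gridhome, /opt/gridbase and /opt/oraInventory; echo previews the expansion (bash syntax) without creating anything:
<source lang=bash>
# Preview the brace expansion used by mkdir/chown; no side effects
echo /opt/{grid{home,base},oraInventory}   # /opt/gridhome /opt/gridbase /opt/oraInventory
</source>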
=ASM=
==Discover LUNs==
<source lang=bash>
# luxadm -e dump_map /devices/pci@0,0/pci8086,2f04@2/pci103c,197f@0,1/fp@0,0:devctl | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /Vendor:/{vendor=$NF;}/Serial Num:/{serial=$NF;}/Unformatted capacity:/{capacity=$(NF-1)""$NF;}disk && /^$/{printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity}' | sort -u
</source>
<pre>
/dev/rdsk/c0t60002AC000000000C061010650004020d0s2 vendor=3PARdata serial=1688061 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004001d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004004d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004005d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004006d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004007d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004008d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004009d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004010d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004011d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004012d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004013d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004014d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004015d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004016d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004017d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004018d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004019d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004020d0s2 vendor=3PARdata serial=1609913 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004001d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004004d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004005d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004006d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004007d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004008d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004009d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004010d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004011d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004012d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004013d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004014d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004015d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004016d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004017d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004018d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004019d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004020d0s2 vendor=3PARdata serial=1609916 capacity=16384.000MBytes
</pre>
==Label Disks==
===Single Disk===
<source lang=bash>
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | nawk '{print $1}' | xargs -n 1 luxadm -e dump_map | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /^$/{printf "%s\n",disk}' | sort -u | xargs -n 1 format -e -f ~/format_command_file.txt
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
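If the ranges still show the defaults (32768/65535), they can be changed persistently with ipadm set-prop. A sketch, assuming Solaris 11 ipadm and the 9000-65500 range shown above:
<source lang=bash>
# Widen the anonymous (ephemeral) port range; persists across reboots
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>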
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
9fb3fcb1ba2ca465ae478d48e8b4831fdc12fee4
1190
1189
2016-02-08T13:27:08Z
Lollypop
2
/* ASM */
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=ASM=
==Discover LUNs==
<source lang=bash>
# luxadm -e dump_map /devices/pci@0,0/pci8086,2f04@2/pci103c,197f@0,1/fp@0,0:devctl | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /Vendor:/{vendor=$NF;}/Serial Num:/{serial=$NF;}/Unformatted capacity:/{capacity=$(NF-1)""$NF;}disk && /^$/{printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity}' | sort -u
</source>
<pre>
/dev/rdsk/c0t60002AC000000000C061010650004020d0s2 vendor=3PARdata serial=1688061 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004001d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004004d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004005d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004006d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004007d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004008d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004009d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004010d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004011d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004012d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004013d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004014d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004015d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004016d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004017d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004018d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004019d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004020d0s2 vendor=3PARdata serial=1609913 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004001d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004004d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004005d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004006d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004007d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004008d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004009d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004010d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004011d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004012d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004013d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004014d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004015d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004016d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004017d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004018d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004019d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004020d0s2 vendor=3PARdata serial=1609916 capacity=16384.000MBytes
</pre>
==Label Disks==
===Single Disk===
<source lang=bash>
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | nawk '{print $1}' | xargs -n 1 luxadm -e dump_map | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /^$/{printf "%s\n",disk}' | sort -u | xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
c4c70cd3b199b7f07d3de13ba9d5593d8cc203ba
1191
1190
2016-02-08T13:36:26Z
Lollypop
2
/* Check pkg dependencies */
wikitext
text/x-wiki
==Get release information==
<source lang=bash>
# pkg info kernel | nawk -F '.' '/Build Release:/{solaris=$NF;}/Branch:/{subrel=$3;update=$4;}END{printf "Solaris %d.%d Update %d\n",solaris,subrel,update;}'
</source>
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list developer/assembler developer/build/make x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=ASM=
==Discover LUNs==
<source lang=bash>
# luxadm -e dump_map /devices/pci@0,0/pci8086,2f04@2/pci103c,197f@0,1/fp@0,0:devctl | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /Vendor:/{vendor=$NF;}/Serial Num:/{serial=$NF;}/Unformatted capacity:/{capacity=$(NF-1)""$NF;}disk && /^$/{printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity}' | sort -u
</source>
<pre>
/dev/rdsk/c0t60002AC000000000C061010650004020d0s2 vendor=3PARdata serial=1688061 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004001d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004004d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004005d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004006d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004007d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004008d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004009d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004010d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004011d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004012d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004013d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004014d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004015d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004016d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004017d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004018d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004019d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004020d0s2 vendor=3PARdata serial=1609913 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004001d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004004d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004005d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004006d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004007d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004008d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004009d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004010d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004011d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004012d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004013d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004014d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004015d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004016d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004017d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004018d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004019d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004020d0s2 vendor=3PARdata serial=1609916 capacity=16384.000MBytes
</pre>
==Label Disks==
===Single Disk===
<source lang=bash>
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | nawk '{print $1}' | xargs -n 1 luxadm -e dump_map | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /^$/{printf "%s\n",disk}' | sort -u | xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
e053029472c5e49f5ae001770e3671bb5fdc7425
1192
1191
2016-02-08T14:21:29Z
Lollypop
2
wikitext
text/x-wiki
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e dump_map /devices/pci@0,0/pci8086,2f04@2/pci103c,197f@0,1/fp@0,0:devctl | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /Vendor:/{vendor=$NF;}/Serial Num:/{serial=$NF;}/Unformatted capacity:/{capacity=$(NF-1)""$NF;}disk && /^$/{printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity}' | sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | nawk '{print $1}' | xargs -n 1 luxadm -e dump_map | nawk '/Disk device/{print $5}' | sort -u | xargs luxadm display | nawk '/DEVICE PROPERTIES for disk:/{disk=$NF}/DEVICE PROPERTIES for:/{disk="";}disk && /^$/{printf "%s\n",disk}' | sort -u | xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses in total, six usable once the network and broadcast addresses are subtracted. This obviously limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
<pre>
/dev/rdsk/c0t60002AC000000000C061010650004020d0s2 vendor=3PARdata serial=1688061 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004001d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004004d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004005d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004006d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004007d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004008d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004009d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004010d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004011d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004012d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004013d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004014d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004015d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004016d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004017d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004018d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004019d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004020d0s2 vendor=3PARdata serial=1609913 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004001d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004004d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004005d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004006d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004007d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004008d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004009d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004010d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004011d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004012d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004013d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004014d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004015d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004016d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004017d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004018d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004019d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004020d0s2 vendor=3PARdata serial=1609916 capacity=16384.000MBytes
</pre>
96b40457f70ca413b85939c9bdef02eb00b42b9a
1193
1192
2016-02-08T14:26:22Z
Lollypop
2
wikitext
text/x-wiki
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
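A quick sanity check that the key exchange worked: run a batch-mode ssh to every node (node names grid01/grid02 are examples, substitute your own). It must print the remote date without asking for a password or host-key confirmation; otherwise the Grid installer's ssh equivalence check will fail later.
<source lang=bash>
$ for node in grid01 grid02 ; do ssh -o BatchMode=yes ${node} date ; done
</source>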
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
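To verify the ownership change, list the devices while following the symlinks (the same -L idea as in the chown above); every ASM candidate device should now show grid:asmadmin and mode 660:
<source lang=bash>
# ls -lL /dev/rdsk/c0t6000*s0 | head -3
</source>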
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
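Afterwards check that the new swap device is active and has the intended volume size:
<source lang=bash>
# swap -l
# zfs get volsize rpool/swap
</source>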
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
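If the current values do not match the 9000-65500 range shown above, they can be set persistently with ipadm (a sketch; pick the range your Oracle release actually requires):
<source lang=bash>
# for protocol in tcp udp ; do
ipadm set-prop -p smallest_anon_port=9000 ${protocol}
ipadm set-prop -p largest_anon_port=65500 ${protocol}
done
</source>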
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses in total, six usable once the network and broadcast addresses are subtracted. This obviously limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
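To verify the interconnects, list the address objects and ping the peer node's interconnect addresses (run from the first node; addresses as configured above):
<source lang=bash>
# ipadm show-addr net1/ci1
# ipadm show-addr net5/ci2
# ping 10.65.0.2
# ping 10.65.0.10
</source>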
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
<pre>
/dev/rdsk/c0t60002AC000000000C061010650004020d0s2 vendor=3PARdata serial=1688061 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004001d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004004d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004005d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004006d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004007d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004008d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004009d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004010d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004011d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004012d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004013d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004014d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004015d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004016d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004017d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004018d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004019d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004020d0s2 vendor=3PARdata serial=1609913 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004001d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004004d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004005d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004006d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004007d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004008d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004009d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004010d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004011d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004012d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004013d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004014d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004015d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004016d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004017d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004018d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004019d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004020d0s2 vendor=3PARdata serial=1609916 capacity=16384.000MBytes
</pre>
5dc7d197b11749b5b58b21a4ddaaa82b4e4e4d52
1194
1193
2016-02-08T14:46:11Z
Lollypop
2
wikitext
text/x-wiki
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run this as the grid user on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) so that all hosts end up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap size to match physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses. This obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always so the daemon always slews instead of stepping the clock — this avoids time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
<pre>
/dev/rdsk/c0t60002AC000000000C061010650004020d0s2 vendor=3PARdata serial=1688061 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004001d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004004d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004005d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004006d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004007d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004008d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004009d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004010d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004011d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004012d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004013d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004014d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004015d0s2 vendor=3PARdata serial=1609913 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004016d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004017d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004018d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004019d0s2 vendor=3PARdata serial=1609913 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C913010650004020d0s2 vendor=3PARdata serial=1609913 capacity=16384.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004001d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004004d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004005d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004006d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004007d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004008d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004009d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004010d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004011d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004012d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004013d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004014d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004015d0s2 vendor=3PARdata serial=1609916 capacity=20480.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004016d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004017d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004018d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004019d0s2 vendor=3PARdata serial=1609916 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004020d0s2 vendor=3PARdata serial=1609916 capacity=16384.000MBytes
</pre>
51e1ef9de7a5cf3301022910ea897d51131d50a7
1195
1194
2016-02-08T14:46:31Z
Lollypop
2
/* Set slew always for ntp */
wikitext
text/x-wiki
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run this as the grid user on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) so that all hosts end up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap size to match physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses. This obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always so the daemon always slews instead of stepping the clock — this avoids time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
280254bc455535614a40581b5909de4c0ebd25a5
1196
1195
2016-02-08T15:11:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run this as the grid user on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) so that all hosts end up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap size to match physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses. This obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always so the daemon always slews instead of stepping the clock — this avoids time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
970393f4bb0821239cb9539bc1db829b6773c3af
1197
1196
2016-02-08T15:12:29Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run this as the grid user on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) so that all hosts end up in known_hosts. The installer needs this.
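The cross login can be scripted; grid01 and grid02 are placeholder node names for a two-node example, replace them with your actual hosts:
<source lang=bash>
$ for node in grid01 grid02 ; do ssh ${node} hostname ; done
</source>
Answer the host-key prompts with yes once per node; afterwards every pairing is in known_hosts.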
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
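The project.max-shm-memory value used above is simply 256 GiB expressed in bytes, which can be checked with shell arithmetic:
<source lang=bash>
$ echo $(( 256 * 1024 * 1024 * 1024 ))
274877906944
</source>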
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# mkdir -p -m 0755 /opt/{grid{home,base},oraInventory}
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o chksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small net with six (eight with net and broadcast) usable IPs. This limits the maximum number of nodes to six... which is obvious...
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp set slew always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
1d3720ff3a4b872e7fcbcafd34d1aa7dea7ae948
1198
1197
2016-02-10T11:06:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
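What the nawk program extracts can be seen on sample input. The Build Release and Branch values below are invented examples (on a real system they come from <code>pkg info kernel</code>), and plain awk is used so the demo runs anywhere:
<source lang=bash>
# Sample pkg info lines; the Branch value is an invented example.
printf 'Build Release: 5.11\nBranch: 0.175.3.1.0.5.0\n' | \
awk -F '.' '
/Build Release:/{
        solaris=$NF;            # "11" from "5.11"
}
/Branch:/{
        subrel=$3;              # third dot-separated field
        update=$4;              # fourth dot-separated field
}
END{
        printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
With this sample input the script prints <code>Solaris 11.3 Update 1</code>.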
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (including itself) so that every host key is recorded in known_hosts. The installer needs this.
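It is easy to miss one pair in that cross-login round; a small sketch that prints every required login (the node names are placeholders, replace them with the real cluster hosts):
<source lang=bash>
# Placeholder node list.
NODES="grid01 grid02"
# Every node must ssh once to every node, itself included, so that
# each host key ends up in ~/.ssh/known_hosts.
for src in ${NODES} ; do
        for dst in ${NODES} ; do
                echo "on ${src}: ssh ${dst} true"
        done
done
</source>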
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
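The project.max-shm-memory value is simply 256 GiB written out in bytes; the prctl check below reports the same limit rounded to 256GB:
<source lang=bash>
# 256 GiB in bytes, as passed to projadd above.
echo $(( 256 * 1024 * 1024 * 1024 ))        # prints 274877906944
</source>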
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
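One way to script that x86 preparation instead of clicking through format is <code>fdisk -B</code>, which writes a default Solaris partition spanning the whole disk. This sketch only prints the calls for review; the disk name is a placeholder:
<source lang=bash>
# Placeholder disk list; in practice generate it with the luxadm
# pipeline from the Discover LUNs section.
DISKS="c0t60000000000000d0"
for disk in ${DISKS} ; do
        # Review the printed commands, then run them without the echo.
        echo "fdisk -B /dev/rdsk/${disk}p0"
done
</source>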
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses. This limits the maximum number of nodes to six.
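The arithmetic behind that limit:
<source lang=bash>
PREFIX=29
TOTAL=$(( 1 << (32 - PREFIX) ))   # addresses in the block
USABLE=$(( TOTAL - 2 ))           # minus network and broadcast
echo "/${PREFIX}: ${TOTAL} addresses, ${USABLE} usable"
</source>
This prints <code>/29: 8 addresses, 6 usable</code>.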
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
<source lang=bash>
# OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
# eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
# unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
# chown -R grid:oinstall ${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
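For reference, this is what the nawk one-liners extract from <code>opatch version</code>: the snapshot name is built from the first word and the version number of the "OPatch Version:" line. The sample output and version string below are invented, and GNU awk stands in for Solaris nawk:
<source lang=bash>
# Derive the ZFS snapshot name from sample `opatch version` output.
sample='OPatch Version: 11.2.0.3.12

OPatch succeeded.'

name=$(echo "$sample" | awk '/OPatch Version:/{print $1"_"$NF;}')
echo "$name"
</source>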
==Apply PSU==
On first node as user grid:
<source lang=bash>
$ export ORACLE_HOME=/opt/gridhome/11.2.0.4
$ OCM_RSP=~grid/ocm_gridcluster1.rsp
$ ${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
$ scp ${OCM_RSP} <other node1>:
$ scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes:
<source lang=bash>
# PSU_DIR=~oracle/orainst/psu
# PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
# OCM_RSP=~grid/ocm_gridcluster1.rsp
# PSU=~oracle/orainst/psu/22378167
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/bin
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# su - grid -c "mkdir -p ${PSU_DIR}"
# su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
# zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
# cd ~grid
# for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
# zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
</source>
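The patch loop above feeds <code>opatch auto</code> one sub-patch directory per <code>bundle.xml</code> it finds. A dry run of just the discovery part against a scratch tree (the sub-patch numbers are made up):
<source lang=bash>
# Dry run of the bundle.xml discovery used by the `opatch auto` loop.
PSU=$(mktemp -d)
mkdir -p ${PSU}/22502456 ${PSU}/22502505
touch ${PSU}/22502456/bundle.xml ${PSU}/22502505/bundle.xml

patches=$(find ${PSU} -name bundle.xml | xargs -n 1 dirname | sort)
count=$(echo "$patches" | wc -l)
echo "$count"
rm -rf ${PSU}
</source>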
42b647d7cb546bc13137d9ce9f8820ee7b2a381b
1202
1201
2016-02-10T13:06:34Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
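To see what the parser picks up, here it is run (with GNU awk in place of nawk) against canned <code>pkg info kernel</code> lines; the Branch value is an invented example:
<source lang=bash>
# Run the release parser against canned `pkg info kernel` output.
sample='          Name: system/kernel
 Build Release: 5.11
        Branch: 0.175.3.1.0.5.0'

release=$(echo "$sample" | awk -F '.' '
  /Build Release:/{ solaris=$NF; }
  /Branch:/{ subrel=$3; update=$4; }
  END{ printf "Solaris %d.%d Update %d\n",solaris,subrel,update; }')
echo "$release"
</source>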
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (even to itself) so that all host keys are added to known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
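The only non-obvious number above is project.max-shm-memory: 274877906944 is simply 256 GiB in bytes, matching the machine's physical RAM assumed elsewhere on this page.
<source lang=bash>
# project.max-shm-memory above equals 256 GiB expressed in bytes.
gib=256
bytes=$(( gib * 1024 * 1024 * 1024 ))
echo "$bytes"
</source>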
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
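The nawk program is a small state machine: lines between "DEVICE PROPERTIES for disk:" and the blank line ending the block accumulate vendor, serial and capacity, which are emitted on the blank line. Here it is exercised on a canned single-disk block (device path and values invented), with GNU awk in place of nawk:
<source lang=bash>
# Exercise the LUN parser on a canned `luxadm display` block.
sample='DEVICE PROPERTIES for disk: /dev/rdsk/c0t6000ABCDd0s2
  Vendor:               HITACHI
  Serial Num:           123456
  Unformatted capacity: 51200.000 MBytes
'

line=$(echo "$sample" | awk '
  /DEVICE PROPERTIES for disk:/{ disk=$NF; }
  /DEVICE PROPERTIES for:/{ disk=""; }
  /Vendor:/{ vendor=$NF; }
  /Serial Num:/{ serial=$NF; }
  /Unformatted capacity:/{ capacity=$(NF-1)""$NF; }
  disk != "" && /^$/{
    printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
  }')
echo "$line"
</source>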
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
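The command file holds exactly the same sequence as the single-disk printf one-liner above; a quick check that the two stay in sync (scratch files under mktemp):
<source lang=bash>
# Show that the printf one-liner and format_command_file.txt agree.
tmp=$(mktemp -d)
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' > ${tmp}/oneliner.txt
cat > ${tmp}/cmdfile.txt <<'EOF'
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
EOF
cmp -s ${tmp}/oneliner.txt ${tmp}/cmdfile.txt && match=yes || match=no
echo "$match"
rm -rf ${tmp}
</source>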
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 net: eight addresses, of which six are usable after subtracting the network and broadcast addresses. This obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always so the clock is slewed instead of stepped and time jumps are avoided!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
<source lang=bash>
# OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
# eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
# unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
# chown -R grid:oinstall ${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
$ export ORACLE_HOME=/opt/gridhome/11.2.0.4
$ OCM_RSP=~grid/ocm_gridcluster1.rsp
$ ${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
$ scp ${OCM_RSP} <other node1>:
$ scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes:
<source lang=bash>
# PSU_DIR=~oracle/orainst/psu
# PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
# OCM_RSP=~grid/ocm_gridcluster1.rsp
# PSU=~oracle/orainst/psu/22378167
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/bin
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# su - grid -c "mkdir -p ${PSU_DIR}"
# su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
# zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
# cd ~grid
# for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
# zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
</source>
7667ab7925000db9550b662918b6d13315e9f64c
1203
1202
2016-02-10T13:16:49Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (even to itself) so that all host keys are added to known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 net: eight addresses, of which six are usable after subtracting the network and broadcast addresses. This obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always so the clock is slewed instead of stepped and time jumps are avoided!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
<source lang=bash>
# OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
# eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
# unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
# chown -R grid:oinstall ${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
$ export ORACLE_HOME=/opt/gridhome/11.2.0.4
$ OCM_RSP=~grid/ocm_gridcluster1.rsp
$ ${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
$ scp ${OCM_RSP} <other node1>:
$ scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes:
<source lang=bash>
# PSU_DIR=~oracle/orainst/psu
# PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
# OCM_RSP=~grid/ocm_gridcluster1.rsp
# PSU=~oracle/orainst/psu/22378167
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/bin
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# su - grid -c "mkdir -p ${PSU_DIR}"
# su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
# zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
# cd ~grid
# for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
# zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
</source>
adcfb0a1e7a7b22c128e1b615b2ffdaac1e4229b
1204
1203
2016-02-10T13:28:14Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (even to itself) so that all host keys are added to known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 net: eight addresses, of which six are usable after subtracting the network and broadcast addresses. This obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always so the clock is slewed instead of stepped and time jumps are avoided!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
<source lang=bash>
# OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
# export ORACLE_HOME=/opt/gridhome/11.2.0.4
# export PATH=${PATH}:${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
# eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
# unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
# chown -R grid:oinstall ${ORACLE_HOME}/OPatch
# zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
d7b77c57ad4ecd95ce27b49eb12ca307d8d81e92
1205
1204
2016-02-10T13:28:56Z
Lollypop
2
/* Upgrade OPatch */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (even to itself) so that all host keys are added to known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses, which limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, enable slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
c8bbcdeacd6ac51c1c1bf61e3c4d0159b4921f08
1206
1205
2016-02-10T14:11:03Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in via ssh from every node to every other node (including itself) so that every host is added to known_hosts; the installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk, so that each disk gets an fdisk partition table.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT IT DOES: it relabels every FC disk it finds!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses, which limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, enable slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
${ORACLE_HOME}/bin/emctl stop dbconsole
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
# opatch apply other patches...
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
fb58d3c7019701935d9aa6344263cd3812e20b71
1207
1206
2016-02-10T14:12:45Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in via ssh from every node to every other node (including itself) so that every host is added to known_hosts; the installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk, so that each disk gets an fdisk partition table.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT IT DOES: it relabels every FC disk it finds!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses, which limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, enable slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
${ORACLE_HOME}/bin/emctl stop dbconsole
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid...
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
3d842373b2ece89f0f6fe4889d76f90ed8ed8b2e
1208
1207
2016-02-10T14:21:49Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
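The nawk program above can be exercised offline; this sketch feeds the same field-splitting logic a sample <code>pkg info kernel</code> excerpt (the Branch value is invented for illustration) and uses plain awk, which behaves like nawk here:
<source lang=bash>
# Sample "pkg info kernel" output; the Branch digits are invented
sample='          Name: system/kernel
 Build Release: 5.11
        Branch: 0.175.2.8.0.3.0'

release=$(printf '%s\n' "$sample" | awk -F '.' '
  /Build Release:/ { solaris = $NF }             # "5.11" -> last dot-field "11"
  /Branch:/        { subrel = $3; update = $4 }  # third and fourth dot-fields
  END { printf "Solaris %d.%d Update %d\n", solaris, subrel, update }')

echo "$release"
</source>
With this sample the result is "Solaris 11.2 Update 8"; on a real box the pipeline reads the installed kernel package instead.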
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in via ssh from every node to every other node (including itself) so that every host is added to known_hosts; the installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
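The project.max-shm-memory value above is specified in plain bytes. A quick sanity check that 274877906944 really corresponds to 256 GiB of RAM (the 256 GiB figure is this example's assumption; adjust it to the machine's memory):
<source lang=bash>
# 256 GiB expressed in bytes, matching the projadd value above
ram_gib=256
shm_bytes=$(( ram_gib * 1024 * 1024 * 1024 ))
echo "$shm_bytes"   # 274877906944
</source>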
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
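The nawk program above is a small state machine: it collects fields while inside a DEVICE PROPERTIES paragraph and prints one summary line at the blank line that closes the paragraph. The logic can be tested offline with plain awk on a made-up <code>luxadm display</code> excerpt (device path, vendor, serial and capacity below are all invented):
<source lang=bash>
# Invented excerpt in the shape of "luxadm display" output
sample='DEVICE PROPERTIES for disk: /dev/rdsk/c0t600AAA0000000001d0s2
  Vendor:               ACME
  Serial Num:           SN0001
  Unformatted capacity: 102400.000 MBytes
'

out=$(printf '%s\n' "$sample" | awk '
  /DEVICE PROPERTIES for disk:/ { disk = $NF }   # start of a disk paragraph
  /DEVICE PROPERTIES for:/      { disk = "" }    # non-disk paragraph: reset
  /Vendor:/                 { vendor = $NF }
  /Serial Num:/             { serial = $NF }
  /Unformatted capacity:/   { capacity = $(NF-1) "" $NF }
  disk != "" && /^$/ {                           # blank line ends the paragraph
    printf "%s vendor=%s serial=%s capacity=%s\n", disk, vendor, serial, capacity
  }')

echo "$out"
</source>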
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk and answer y for every disk, so that each disk gets an fdisk partition table.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT IT DOES: it relabels every FC disk it finds!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
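The RAM=256G value above is typed in by hand. One way to derive it is to parse the memory line that <code>prtconf</code> prints, sketched here against a hard-coded sample line (the 262144 figure is invented; on a live system you would pipe <code>prtconf</code> itself into the awk):
<source lang=bash>
# prtconf prints a line like "Memory size: 262144 Megabytes";
# convert megabytes to the "<n>G" form expected by zfs create -V
memline='Memory size: 262144 Megabytes'
RAM=$(printf '%s\n' "$memline" | awk '/Memory size:/ { printf "%dG", $3 / 1024 }')
echo "$RAM"   # 256G
</source>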
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
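A quick check that the configured anonymous port range still leaves plenty of ephemeral ports; the bounds are copied from the ipadm output above:
<source lang=bash>
# Ports available between smallest_anon_port and largest_anon_port (inclusive)
smallest=9000
largest=65500
ports=$(( largest - smallest + 1 ))
echo "$ports"   # 56501
</source>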
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable after subtracting the network and broadcast addresses, which limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, enable slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
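The ZFS snapshot names above embed the OPatch version reported by <code>opatch version</code>. The nawk expression that builds the suffix can be checked offline against a sample output line (the version number is invented):
<source lang=bash>
# "opatch version" prints a line like "OPatch Version: 11.2.0.3.12"
sample='OPatch Version: 11.2.0.3.12'
snap=$(printf '%s\n' "$sample" | awk '/OPatch Version:/ { print $1 "_" $NF }')
echo "rpool/grid@${snap}"   # rpool/grid@OPatch_11.2.0.3.12
</source>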
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid...
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
8a427a05d99c0b117a0cfec7e547b364603e69d7
1209
1208
2016-02-10T14:24:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in via ssh from every node to every other node (including itself) so that every host is added to known_hosts; the installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, six of them usable after subtracting the network and broadcast addresses. This limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew_always for NTP==
After configuring NTP, set slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid... # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
2083b7eb26b8388fb570c372129015277ce1a5e2
1210
1209
2016-02-10T14:37:01Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run the following on every other node, as user grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (including itself) so that every host gets added to known_hosts. The installer requires this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, six of them usable after subtracting the network and broadcast addresses. This limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew_always for NTP==
After configuring NTP, set slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid... # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure Listener==
<source lang=bash>
grid@grid01:~$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
grid@grid01:~$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
</source>
ee19cf8046b29c2f05846ab2c9503a2e76de866b
1211
1210
2016-02-10T14:41:36Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run the following on every other node, as user grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (including itself) so that every host gets added to known_hosts. The installer requires this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, six of them usable after subtracting the network and broadcast addresses. This limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew_always for NTP==
After configuring NTP, set slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid... # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure Listener==
<source lang=bash>
grid@grid01:~$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
grid@grid01:~$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
</source>
1f8f78bef20d791e332af8ca6f31d6a6dd2e14d3
1212
1211
2016-02-10T14:42:23Z
Lollypop
2
/* Configure Listener */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then run the following on every other node, as user grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (including itself) so that every host gets added to known_hosts. The installer requires this.
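The required cross-logins grow quadratically with the node count. A small sketch that enumerates every source-to-target pair that must end up in known_hosts (the node names grid01..grid03 are hypothetical placeholders):

```shell
# Hypothetical node list -- replace with your cluster nodes.
NODES="grid01 grid02 grid03"

# Every node must log in to every node, including itself.
pairs=""
for src in ${NODES} ; do
  for dst in ${NODES} ; do
    pairs="${pairs} ${src}:${dst}"
  done
done

# On each node you would then run, as grid:
#   for dst in ${NODES} ; do ssh ${dst} hostname ; done
echo ${pairs}
```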
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
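The large byte value passed to project.max-shm-memory in the projadd above is easier to sanity-check in GiB; it should match the 256GB that prctl reports:

```shell
# Convert the projadd value for project.max-shm-memory to GiB.
BYTES=274877906944
GB=$((BYTES / 1024 / 1024 / 1024))
echo "${GB}GB"
```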
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
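The final nawk stage of the pipeline can be tested in isolation by feeding it canned `luxadm display` output. A sketch; the sample device path, vendor, and capacity values are made up, the output layout is assumed, and plain awk stands in for Solaris nawk:

```shell
# Canned 'luxadm display' fragment (assumed layout); the trailing
# blank line is what triggers the per-disk printf.
sample='DEVICE PROPERTIES for disk: /dev/rdsk/c0t6000ABCDd0s2
  Vendor:               FUJITSU
  Serial Num:           123456
  Unformatted capacity: 102400.000 MBytes
'
out=$(printf '%s\n' "${sample}" | awk '
  /DEVICE PROPERTIES for disk:/ { disk=$NF }
  /DEVICE PROPERTIES for:/      { disk="" }
  /Vendor:/               { vendor=$NF }
  /Serial Num:/           { serial=$NF }
  /Unformatted capacity:/ { capacity=$(NF-1)""$NF }
  disk != "" && /^$/ {
    printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity
  }')
echo "${out}"
```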
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk.
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
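Instead of hardcoding RAM=256G, the value can be derived from the machine. A sketch; it assumes `prtconf` reports a line of the form `Memory size: <n> Megabytes`, and uses a stand-in value here so the arithmetic is visible:

```shell
# On Solaris you would derive the megabyte count like this
# (assumed output format 'Memory size: 262144 Megabytes'):
#   RAM_MB=$(prtconf | nawk '/^Memory size:/{print $3}')
RAM_MB=262144                   # example stand-in: 256 GB in MB
RAM=$((RAM_MB / 1024))G         # value to pass to 'zfs create -V'
echo "${RAM}"
```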
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, six of them usable after subtracting the network and broadcast addresses. This limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
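The node limit follows directly from the /29 prefix used above; a quick check of the arithmetic:

```shell
# Usable host addresses in a /29 interconnect subnet:
# 2^(32-29) = 8 addresses, minus network and broadcast.
PREFIX=29
USABLE=$(( (1 << (32 - PREFIX)) - 2 ))
echo "${USABLE}"
```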
==Set slew_always for NTP==
After configuring NTP, set slew_always to avoid time jumps.
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid... # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure Listener==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
</source>
8f7693d390382d2905c94372d79c9ea802b5a9b5
1213
1212
2016-02-10T14:51:08Z
Lollypop
2
/* Configure Listener */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, of which six are usable once the network and broadcast addresses are subtracted. This obviously limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch apply other patches as user grid... # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
d2f627f73a623e06198e336f012bdd5553a049c4
1214
1213
2016-02-10T14:52:59Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, of which six are usable once the network and broadcast addresses are subtracted. This obviously limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# opatch prereq CheckConflictAgainstOHWithDetail -ph ./ # <-- only on first node
# opatch apply other patches as user grid... # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
c51517b98e63de514834a7854b8d7fb10dc52f0a
1215
1214
2016-02-10T15:00:31Z
Lollypop
2
/* Apply PSU */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
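The cross-login matrix can be sketched as a nested loop. The hostnames grid01/grid02 are placeholders, and the echo keeps this a dry run; replace it with the real ssh call on the cluster:
<source lang=bash>
# Placeholder node list -- adjust to the real cluster members.
NODES="grid01 grid02"
for src in ${NODES} ; do
  for dst in ${NODES} ; do
    # Dry run; replace echo with: ssh ${src} ssh ${dst} true
    echo "ssh ${src} -> ${dst}"
  done
done
</source>
Every node is paired with every node including itself, so even two nodes already yield four logins.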
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
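If the ranges still show the OS defaults, they can be raised with ipadm set-prop. A sketch, to be run as root on Solaris; the echo keeps it a dry run:
<source lang=bash>
for protocol in tcp udp ; do
  # Dry run; drop the echo to actually change the ranges.
  echo ipadm set-prop -p smallest_anon_port=9000 ${protocol}
  echo ipadm set-prop -p largest_anon_port=65500 ${protocol}
done
</source>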
==Set up private cluster interconnects==
Example with a small /29 network: eight addresses, of which six are usable once the network and broadcast addresses are subtracted. This obviously limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
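The six-node limit follows directly from the prefix length:
<source lang=bash>
prefix=29
# 2^(32-prefix) addresses, minus the network and broadcast address
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "usable host addresses in a /${prefix}: ${hosts}"
</source>
This prints 6, i.e. one private address per possible node on each interconnect.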
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
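The eval mv line above is a brace-expansion trick (bash) that renames the old OPatch directory to OPatch_&lt;version&gt; before the new one is unzipped. With a made-up version string, and echo keeping it a dry run, it expands like this:
<source lang=bash>
ORACLE_HOME=/opt/gridhome/11.2.0.4
version_line='OPatch Version: 11.2.0.3.4'   # made-up sample of opatch version output
inner=$(echo "${version_line}" | awk '/OPatch Version:/{print $1","$1"_"$NF;}')
# eval forces a second parsing pass so the braces expand after ${inner} is substituted
eval echo mv ${ORACLE_HOME}/{${inner}}
</source>
which prints the rename of OPatch to OPatch_11.2.0.3.4 inside the Oracle home.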
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
d23c9a1419088e4a4beebe64a1afa236acc05db1
1216
1215
2016-02-10T15:20:39Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
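Applied to sample pkg info lines (the values are made up; plain awk is used here instead of Solaris nawk), the parser above yields:
<source lang=bash>
printf 'Build Release: 5.11\nBranch: 0.175.3.1.0.5.0\n' | \
awk -F '.' '
/Build Release:/{ solaris=$NF; }
/Branch:/{ subrel=$3; update=$4; }
END{ printf "Solaris %d.%d Update %d\n",solaris,subrel,update; }'
</source>
which prints "Solaris 11.3 Update 1": with FS set to ".", $NF of the release line is 11, and fields 3 and 4 of the branch line carry the update information.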
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run this as grid on every other node that was added:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
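The value 274877906944 for project.max-shm-memory is simply 256 GiB expressed in bytes. Computing it with shell arithmetic makes such limits easier to audit than a bare magic number:

```shell
# 256 GiB in bytes: 256 * 1024^3 = 274877906944
SHM_BYTES=$((256 * 1024 * 1024 * 1024))
echo ${SHM_BYTES}   # 274877906944
```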
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, of which six are usable after subtracting the network and broadcast addresses. This limits the cluster to a maximum of six nodes.
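The address budget of a prefix can be checked with shell arithmetic; only the prefix length 29 is taken from the example above:

```shell
PREFIX=29
TOTAL=$((1 << (32 - PREFIX)))   # 2^(32-29) = 8 addresses in a /29
USABLE=$((TOTAL - 2))           # minus network and broadcast address
echo "${TOTAL} addresses, ${USABLE} usable"   # 8 addresses, 6 usable
```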
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
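The snapshot name is derived from the `opatch version` banner. The nawk expression can be checked in isolation; the version string below is a made-up example, and plain awk behaves the same as Solaris nawk for this pattern:

```shell
# Simulated `opatch version` output -- the real banner looks similar.
version_banner='OPatch Version: 11.2.0.3.12'

# $1 is "OPatch", $NF is the version number; joined they give the snapshot suffix.
echo "${version_banner}" | awk '/OPatch Version:/{print $1"_"$NF;}'
# -> OPatch_11.2.0.3.12
```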
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
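The `${PSU##*/}` expansions above strip everything up to and including the last slash, leaving only the patch number for the snapshot and inventory file names. A quick check, using an example path in place of the real home directory:

```shell
# Example path standing in for ~oracle/orainst/psu/22378167.
PSU=/export/home/oracle/orainst/psu/22378167

# ##*/ removes the longest prefix matching '*/', i.e. the directory part.
echo "${PSU##*/}"   # 22378167
```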
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA3-DATA">
<dsk string="/dev/rdsk/"/>
</fg>
<fg name="HSA4-DATA">
<dsk string="/dev/rdsk/"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle>
ASMCMD [+] > chdg data_config.xml
</source>
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp set slew always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA3-DATA">
<dsk string="/dev/rdsk/"/>
</fg>
<fg name="HSA4-DATA">
<dsk string="/dev/rdsk/"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
57b8261e31e49ba847d9e527b08ab311c1f254a6
1220
1219
2016-02-10T15:29:09Z
Lollypop
2
/* Create ASM diskgroups */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
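The nawk program above keys off the ''Build Release'' and ''Branch'' fields. It can be checked offline with canned pkg info lines; the Branch value below is only an illustrative example, and plain awk is used in place of nawk so the sketch runs anywhere.
<source lang=bash>
# Feed sample `pkg info kernel` lines through the same field logic:
printf '%s\n' \
  ' Build Release: 5.11' \
  '        Branch: 0.175.3.1.0.5.0' | \
awk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
# → Solaris 11.3 Update 1
</source>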
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Afterwards, run this as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (including itself), so that every host ends up in known_hosts on every node. The installer needs this.
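Before running the logins, a dry run can help to review the full login matrix. This is only a sketch: the hostnames grid01/grid02 are placeholders for your cluster nodes, and the loop merely prints the commands instead of executing them.
<source lang=bash>
# Print the full cross-login matrix for review; nothing is executed.
nodes="grid01 grid02"
for src in ${nodes} ; do
  for dst in ${nodes} ; do
    echo "ssh ${src} ssh ${dst} hostname"
  done
done
</source>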
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
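The byte value given for project.max-shm-memory is exactly 256 GiB, which matches the prctl output in the check below:
<source lang=bash>
# 256 GiB in bytes, the value used for project.max-shm-memory:
echo $((256 * 1024 * 1024 * 1024))
# → 274877906944
</source>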
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
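The nawk state machine at the end of the pipeline prints one line per disk when it reaches the blank line that closes a property block. A self-contained check with fabricated luxadm display output (device path, vendor, serial, and capacity are made up; plain awk stands in for nawk):
<source lang=bash>
printf '%s\n' \
  'DEVICE PROPERTIES for disk: /dev/rdsk/c0tEXAMPLEd0s2' \
  '  Vendor:               3PARdata' \
  '  Serial Num:           12345' \
  '  Unformatted capacity: 51200.000 MByte' \
  '' | \
awk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}'
# → /dev/rdsk/c0tEXAMPLEd0s2 vendor=3PARdata serial=12345 capacity=51200.000MByte
</source>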
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk.
'''DO NOT PERFORM THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
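The RAM variable should match the physical memory of the machine. On Solaris this can be read from prtconf; a sketch with a canned output line (the value is an example, and awk is used in place of nawk):
<source lang=bash>
printf 'Memory size: 262144 Megabytes\n' | \
awk '/Memory size:/{printf "%dG\n",$3/1024;}'
# → 256G
</source>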
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
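If the ports are still at their defaults, they can be adjusted with ipadm set-prop (the setting is persistent); the 9000-65500 range matches the values shown above:
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>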
==Setup private cluster interconnects==
Example with a small /29 network: eight addresses, of which six are usable (network and broadcast addresses excluded). This limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
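The six-node limit follows directly from the /29 prefix length:
<source lang=bash>
prefix=29
echo $(( (1 << (32 - prefix)) - 2 ))
# → 6
</source>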
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
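The eval mv line deserves a note: in bash, brace expansion runs before command substitution, so without eval the generated {OPatch,OPatch_VERSION} pattern would stay a literal argument; eval makes the shell parse the line a second time, brace-expanding it into the two mv arguments. A throwaway reproduction in a scratch directory with a made-up version string (requires bash):
<source lang=bash>
dir=$(mktemp -d)
mkdir "${dir}/OPatch"
version="11.2.0.3.4"
# eval re-parses "mv ${dir}/{OPatch,OPatch_11.2.0.3.4}" so the braces expand:
eval mv "${dir}"/{$(echo "OPatch,OPatch_${version}")}
ls "${dir}"
# → OPatch_11.2.0.3.4
</source>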
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
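To avoid typos in the long connect descriptor, it can be generated from host and port variables before pasting it into the ALTER SYSTEM statement (the values below are the example ones from above):
<source lang=bash>
host="172.1.20.1"
port=50650
printf '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%d))))\n' "${host}" "${port}"
</source>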
=ASM=
==Create ASM diskgroups==
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
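The repetitive dsk entries in data_config.xml can be generated with a small loop instead of being typed by hand; the device paths below are shortened placeholders:
<source lang=bash>
i=0
for dev in /dev/rdsk/c0tAAAAd0s0 /dev/rdsk/c0tBBBBd0s0 ; do
  i=$((i + 1))
  printf '  <dsk name="HSA1_DATA%02d" string="%s"/>\n' "${i}" "${dev}"
done
</source>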
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o chksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small net with six (eight with net and broadcast) usable IPs. This limits the maximum number of nodes to six... which is obvious...
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp set slew always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"DATA\" power=\"3\">\n";
}
/002d0/,/011d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s-%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
''Solaris OracleClusterware, revision 1223 by Lollypop, 2016-02-10T16:15:45Z: /* Create ASM diskgroups */''
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run the following on every other node, logged in as grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now log in from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
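The cross-login can be scripted. A minimal sketch (the node names in NODES are placeholders for your own; it only prints the logins it would perform, so replace echo with the real call once the keys are in place):
<source lang=bash>
NODES="grid01 grid02"              # placeholder node list
for src in ${NODES} ; do
  for dst in ${NODES} ; do
    # from ${src}, log in to ${dst} once so ${dst} ends up in known_hosts
    echo ssh ${src} ssh ${dst} hostname
  done
done
</source>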
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DO NOT PERFORM THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=text>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
        -o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
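If a system still shows the defaults, the anonymous port range can be raised persistently with ipadm set-prop. A sketch using the values required above (run on every node):
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>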
==Setup private cluster interconnects==
Example with a small /29 subnet: eight addresses, six of them usable, which obviously limits the cluster to at most six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, enable slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
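Should the new OPatch misbehave, the snapshots taken above allow reverting. A sketch (the snapshot name is whatever the opatch version commands produced; rollback works per dataset, so repeat for each affected dataset):
<source lang=bash>
zfs list -t snapshot -r rpool/grid
zfs rollback -r rpool/grid/gridhome@<snapshot>
</source>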
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks:
# one line per disk
# the device path in the first field
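For illustration, LUNs.txt entries look like the device strings in the data_config.xml below, except that they still carry the s2 slice, which the script rewrites to s0:
<source lang=text>
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2
/dev/rdsk/c0t60002AC000000000C913010650004003d0s2
</source>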
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
	printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
		printf "    <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "</chdg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
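Afterwards the new disk group can be checked from ASMCMD; lsdg lists all mounted disk groups (sketch, output omitted):
<source lang=oracle11>
ASMCMD [+] > lsdg
</source>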
An analogous generator for a normal-redundancy FRA disk group, to be fed to mkdg:
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "</dg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
479eb69457905dde0025807a6679b170d30638f7
1225
1224
2016-02-10T16:39:06Z
Lollypop
2
/* Create ASM diskgroups */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run the following as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) to add all host keys to the known_hosts files. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, of which six are usable (network and broadcast excluded). This obviously limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
# Rename the current OPatch directory to OPatch_<version>; eval is needed
# because the comma list comes from a command substitution:
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
===Example for chdg===
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "</chdg>\n";
}
' LUNs.txt
</source>
===Example for mkdg===
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "</dg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
dfcfffee2f0a838d6fdcc40aeb9173a75bce6203
1226
1225
2016-02-10T17:05:20Z
Lollypop
2
/* Example for mkdg */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run the following as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) to add all host keys to the known_hosts files. The installer needs this.
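The cross login can be scripted. A minimal sketch, assuming the node names grid01 and grid02 (replace with your real cluster members); it only prints the commands, drop the echo to execute them:
<source lang=bash>
# Hypothetical node list -- replace with the real hostnames.
NODES="grid01 grid02"
for src in ${NODES} ; do
  for dst in ${NODES} ; do
    # One login in every direction (including src == dst) so each
    # host key ends up in every node's known_hosts.
    echo ssh ${src} ssh -o StrictHostKeyChecking=no ${dst} true
  done
done
</source>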
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, of which six are usable (network and broadcast excluded). This obviously limits the cluster to a maximum of six nodes.
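The six-usable-addresses figure follows directly from the prefix length; a quick shell arithmetic check:
<source lang=bash>
# A /29 subnet has 2^(32-29) = 8 addresses; the network and broadcast
# addresses are not usable for hosts.
PREFIX=29
echo "usable hosts in a /${PREFIX}: $(( (1 << (32 - PREFIX)) - 2 ))"
</source>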
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
# Rename the current OPatch directory to OPatch_<version>; eval is needed
# because the comma list comes from a command substitution:
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
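The eval mv line above works because bash performs brace expansion before command substitution: without eval, the comma produced by $(opatch version ...) arrives too late for the braces to expand. A self-contained illustration of the trick (the /tmp paths and version string are invented):
<source lang=bash>
# Stand-in for the old OPatch directory.
mkdir -p /tmp/opatch_demo/OPatch
# The command substitution yields "OPatch,OPatch_11.2.0.3.4"; eval
# re-parses the line so the brace expansion actually sees the comma.
eval mv /tmp/opatch_demo/{$(echo 'OPatch,OPatch_11.2.0.3.4')}
ls /tmp/opatch_demo
</source>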
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
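For illustration, a tiny hypothetical LUNs.txt (the device names are invented) together with the field handling the generators below rely on; awk stands in for Solaris nawk here:
<source lang=bash>
# Sample LUNs.txt: one disk per line, device path in the first field.
cat > /tmp/LUNs.txt <<'EOF'
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2
/dev/rdsk/c0t60002AC000000000C916010650004003d0s2
EOF
# The generators take the path from field 1 and rewrite the trailing
# slice s2 -> s0 before emitting the <dsk> elements.
awk '{gsub(/s2$/,"s0",$1); print $1}' /tmp/LUNs.txt
</source>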
===Example for chdg===
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "</chdg>\n";
}
' LUNs.txt
</source>
===Example for mkdg===
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "</dg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
e65e48b26b8a5dae13f5c15aef66e88d741a026b
1227
1226
2016-02-10T17:11:37Z
Lollypop
2
/* Example for mkdg */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, run the following as grid on every other node:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to itself) to add all host keys to the known_hosts files. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DO NOT RUN THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING: IT RELABELS EVERY FC DISK!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
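RAM must match the physical memory of the machine. prtconf reports it in megabytes, so a small conversion gives the value; shown here against a canned sample line (hypothetical), on a live system pipe prtconf directly and use nawk as elsewhere on this page:

```shell
# Sample prtconf line for a 256 GB machine; on a live system replace the
# echo with:  prtconf | awk '/Memory size/{printf "%dG\n",$3/1024}'
echo "Memory size: 262144 Megabytes" | \
    awk '/Memory size/{printf "%dG\n",$3/1024}'
# -> 256G
```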
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
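If the ranges are still at their defaults, they can be moved to the 9000-65500 window shown above with ipadm set-prop. The sketch below only prints the commands so they can be reviewed before piping them to sh:

```shell
# Build the ipadm set-prop commands instead of running them blindly.
CMDS=""
for protocol in tcp udp ; do
  CMDS="${CMDS}ipadm set-prop -p smallest_anon_port=9000 ${protocol}
ipadm set-prop -p largest_anon_port=65500 ${protocol}
"
done
printf '%s' "${CMDS}"
```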
==Setup private cluster interconnects==
Example with a /29 network: eight addresses minus network and broadcast leaves six usable IPs, which limits the cluster to a maximum of six nodes.
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
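The six-node limit follows directly from the /29 prefix:

```shell
# Usable host addresses in a /29: 2^(32-29) total, minus network and broadcast.
echo $(( (1 << (32 - 29)) - 2 ))
# -> 6
```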
==Set slew always for ntp==
After configuring NTP, set slew_always so the clock is slewed instead of stepped; time warps confuse the cluster!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
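The eval mv line works by letting the shell make a second expansion pass over the {old,new} string built from opatch version. With echo substituted for mv and a hypothetical version string, the mechanics become visible (in a brace-expanding shell the result is the two-argument rename):

```shell
ORACLE_HOME=/opt/gridhome/11.2.0.4
# Hypothetical output of the opatch|nawk pipeline: old name, new name.
ver="OPatch,OPatch_11.2.0.3.4"
# eval re-parses the already-expanded string, so {a,b} can expand
# even though it was produced by a command substitution.
out=$(eval echo mv ${ORACLE_HOME}/{${ver}})
echo "${out}"
```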
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes, do the following as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks:
# one disk per line.
# the device path in the first field.
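For reference, a LUNs.txt built from the discovery pipeline above could look like the following (hypothetical serials and capacities; only the first field and the C913/C916/C061/C062 fragments are used by the scripts below):

```text
/dev/rdsk/c0t60002AC000000000C913010650004002d0s2 vendor=3PARdata serial=1234567 capacity=102400.000MBytes
/dev/rdsk/c0t60002AC000000000C916010650004002d0s2 vendor=3PARdata serial=1234568 capacity=102400.000MBytes
```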
===Example for chdg===
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</source>
===Example for mkdg===
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</source>
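The generated mkdg markup is likewise handed to asmcmd, assuming it was saved as fra_config.xml (hypothetical file name):

```text
ASMCMD [+] > mkdg fra_config.xml
```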
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
93fb3c20b821234c6473e2e0ba651e1f494f1aa2
1228
1227
2016-02-10T17:18:45Z
Lollypop
2
/* Example for chdg */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public key of other nodes.
After that do this on all other nodes added as grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to its self) to add all to the known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
For x86 you have to call format -> fdisk -> y for all disks first :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU DO!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o chksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small net with six (eight with net and broadcast) usable IPs. This limits the maximum number of nodes to six... which is obvious...
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp set slew always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
===Example for chdg===
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</source>
===Example for mkdg===
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C913/){storage="HSA1";};
if(/C916/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C913010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C913010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C913010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C913010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C913010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C913010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C913010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C913010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C913010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C913010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C916010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C916010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C916010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C916010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C916010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C916010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C916010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C916010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C916010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C916010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
ccdf171489fb4ccd9e81e3cb9b1ba70f3de56ebd
1229
1228
2016-02-10T17:30:16Z
Lollypop
2
/* Example for mkdg */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public key of other nodes.
After that do this on all other nodes added as grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (even to its self) to add all to the known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
For x86 you have to call format -> fdisk -> y for all disks first :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU DO!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o chksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small net with six (eight with net and broadcast) usable IPs. This limits the maximum number of nodes to six... which is obvious...
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp set slew always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
1231
1230
2016-02-10T17:30:59Z
Lollypop
2
/* Example for chdg */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
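To see what the pipeline above produces, the same script can be fed a hypothetical <i>pkg info kernel</i> excerpt (shown here with awk, which behaves identically to nawk for this script; the branch value 0.175.2.10.0.4.0 is made up and encodes Solaris 11.2 SRU 10):
<source lang=bash>
# Hypothetical 'pkg info kernel' excerpt piped through the parser.
printf 'Build Release: 5.11\nBranch: 0.175.2.10.0.4.0\n' | \
awk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
# -> Solaris 11.2 Update 10
</source>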
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
Then, on every other node, run this as user grid:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now perform an ssh login from every node to every other node (including itself) so that all hosts end up in known_hosts. The installer needs this.
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
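The project.max-shm-memory value is simply 256 GiB expressed in bytes, which can be verified with shell arithmetic:
<source lang=bash>
# 256 GiB in bytes, matching the projadd value above.
echo $((256 * 1024 * 1024 * 1024))
# -> 274877906944
</source>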
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
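A quick way to check the final nawk stage is to feed it a hypothetical <i>luxadm display</i> excerpt (vendor, serial, and capacity below are made up; awk is used in place of nawk, with identical results):
<source lang=bash>
# Hypothetical 'luxadm display' excerpt run through the same parser.
printf '%s\n' \
'DEVICE PROPERTIES for disk: /dev/rdsk/c0t60002AC000000000C903010650004002d0s2' \
'  Vendor:               3PARdata' \
'  Serial Num:           1234567' \
'  Unformatted capacity: 102400.000 MBytes' \
'' | \
awk '
/DEVICE PROPERTIES for disk:/{ disk=$NF; }
/DEVICE PROPERTIES for:/{ disk=""; }
/Vendor:/{ vendor=$NF; }
/Serial Num:/{ serial=$NF; }
/Unformatted capacity:/{ capacity=$(NF-1)""$NF; }
disk != "" && /^$/{ printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity; }'
# One summary line per disk, emitted at the empty line that ends each block.
</source>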
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DO NOT PERFORM THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap size to match physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net providing six usable IPs (eight minus the network and broadcast addresses). This obviously limits the cluster to at most six nodes.
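The subnet arithmetic can be sanity-checked in the shell:
<source lang=bash>
# Usable hosts in a /29: 2^(32-29) addresses minus network and broadcast.
echo $(( (1 << (32 - 29)) - 2 ))
# -> 6
</source>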
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring NTP, set slew_always to avoid time jumps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
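The snapshot name is derived from the <i>opatch version</i> output; with a hypothetical version string the expression yields (awk shown in place of nawk, same result):
<source lang=bash>
# Hypothetical 'opatch version' output; $1 is "OPatch", $NF the version.
printf 'OPatch Version: 11.2.0.3.12\n' | \
awk '/OPatch Version:/{print $1"_"$NF;}'
# -> OPatch_11.2.0.3.12
</source>
The second nawk variant above prints <i>OPatch,OPatch_&lt;version&gt;</i>, so the eval'd mv renames the old OPatch directory to OPatch_&lt;version&gt; via brace expansion before the new one is unzipped.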
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
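The ${PSU##*/} expansion strips everything up to the last slash, leaving only the patch number used in the snapshot and inventory file names. With a hypothetical expanded path:
<source lang=bash>
# '##*/' removes the longest prefix matching '*/' (hypothetical path).
PSU=/export/home/oracle/orainst/psu/22378167
echo "${PSU##*/}"
# -> 22378167
</source>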
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt lists all disks:
# one line per disk,
# the device path in the first field.
===Example for chdg===
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</source>
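Fed two hypothetical 3PAR LUN paths (the first and last of the 002d0..011d0 range), the generator logic produces the failure-group fragment below (awk shown in place of nawk):
<source lang=bash>
# Two hypothetical LUN paths run through the chdg generator logic.
printf '%s\n' \
/dev/rdsk/c0t60002AC000000000C903010650004002d0s2 \
/dev/rdsk/c0t60002AC000000000C903010650004011d0s2 | \
awk -v type='DATA' '
/002d0/,/011d0/ {
if(/C903/){storage="HSA1";};
if(/002d0/){count=1; printf "    <add>\n      <fg name=\"%s_%s\">\n",storage,type;};
gsub(/s2$/,"s0",$1);
printf "        <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){print "      </fg>"; print "    </add>";}
}'
</source>
Note how the trailing slice s2 is rewritten to s0, matching the partition created in the disk-labeling step.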
===Example for mkdg===
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C903010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C903010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C903010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C903010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C903010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C903010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C903010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C903010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C903010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C903010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C906010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C906010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C906010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C906010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C906010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C906010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C906010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C906010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C906010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C906010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
fe5de4f6cb95cff3117f1f61492b35494fa8f487
EasyRSA
0
275
1232
2016-02-11T18:04:52Z
Lollypop
2
Die Seite wurde neu angelegt: „ ==User certificates with passwords in scripts== Add a line after <i># output_password = secret</i>: <source lang=bash> # output_password = secret output_pas…“
wikitext
text/x-wiki
==User certificates with passwords in scripts==
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that the openssl calls taking the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this for example:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
f7859a50ca6e765dfc82081a2369d821ae5a33a6
1233
1232
2016-02-11T18:05:32Z
Lollypop
2
wikitext
text/x-wiki
==User certificates with passwords in scripts==
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that the openssl calls taking the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this for example:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
This is useful for batch generation of many client certificates.
37099378a1f13439d02dc134239307808ff42122
1234
1233
2016-02-11T18:07:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
==User certificates with passwords in scripts==
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can then call it like this:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
This is useful for batch generation of many client certificates.
a71bf01d1d9086c3010875e74e926d89da688892
1235
1234
2016-02-15T14:18:31Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=Create CA user=
<source lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</source>
=Do everything CA specific as CA user!=
<source lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</source>
=Setup EasyRSA=
==Ubuntu packages==
<source lang=bash>
# aptitude install openvpn easy-rsa
</source>
==Create your CA==
<source lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</source>
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Run this only once, before you start using your CA. It deletes everything: keys and certificates!'''
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
==Generate DH parameter==
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
or
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
==Generate TLS-auth parameter==
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and need to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can then call it like this:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
==Create your CA certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
==Create the server certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free, but at your own risk!!!
# Defaults:
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${TYPE}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
export SERVER
;;
*)
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
gsub(/\//,", ", retval)
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
# Replace all <VARIABLE> in template file with ENVIRON["VARIABLE"]
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# CA Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</source>
<pre>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
</pre>
==OpenVPN Server==
===OpenVPN Server Template===
I am using the mysql-auth plugin from [https://github.com/chantra/openvpn-mysql-auth openvpn-mysql-auth].
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</source>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for client===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</source>
81bcfbc2962f07955a9db5fd74d1c9b70f1c0bbd
1236
1235
2016-02-15T14:19:18Z
Lollypop
2
/* OpenVPN Server Template */
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=Create CA user=
<source lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</source>
=Do everything CA specific as CA user!=
<source lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</source>
=Setup EasyRSA=
==Ubuntu packages==
<source lang=bash>
# aptitude install openvpn easy-rsa
</source>
==Create your CA==
<source lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</source>
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Run this only once, before you start using your CA. It deletes everything: keys and certificates!'''
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
==Generate DH parameter==
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
or
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
==Generate TLS-auth parameter==
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and need to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can then call it like this:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
==Create your CA certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
==Create the server certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free, but at your own risk!!!
# Defaults:
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${TYPE}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
export SERVER
;;
*)
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
gsub(/\//,", ", retval)
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
# Replace all <VARIABLE> in template file with ENVIRON["VARIABLE"]
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# CA Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</source>
<pre>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
</pre>
==OpenVPN Server==
===OpenVPN Server Template===
I am using the mysql-auth plugin from [https://github.com/chantra/openvpn-mysql-auth openvpn-mysql-auth].
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</source>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for client===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</source>
66b9604930a79103fbd677fb77b4ff7964f3374f
1237
1236
2016-02-15T14:22:18Z
Lollypop
2
/* OpenVPN Server Template */
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=Create CA user=
<source lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</source>
=Do everything CA specific as CA user!=
<source lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</source>
=Setup EasyRSA=
==Ubuntu packages==
<source lang=bash>
# aptitude install openvpn easy-rsa
</source>
==Create your CA==
<source lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</source>
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Run this only once, before you start using your CA. It deletes everything: keys and certificates!'''
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
</source>
==Generate DH parameter==
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
</source>
or
<source lang=bash>
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
</source>
==Generate TLS-auth parameter==
<source lang=bash>
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
</source>
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and need to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can then call it like this:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
==Create your CA certificate==
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
</source>
==Create the server certificate==
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server
</source>
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free, but at your own risk!!!
# Defaults:
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${TYPE}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
export SERVER
;;
*)
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
gsub(/\//,", ", retval)
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
# Replace all <VARIABLE> in template file with ENVIRON["VARIABLE"]
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# CA Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</source>
<pre>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
</pre>
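The substitution engine inside get_ovpn.sh is the small awk loop that replaces each UPPER_CASE placeholder in angle brackets with the environment variable of the same name. A minimal stand-alone sketch of just that loop (file name and values are made up for the demo):

<source lang=bash>
# Stand-alone sketch of the placeholder substitution used by get_ovpn.sh:
# every <UPPER_CASE> token is replaced by the environment variable of the
# same name; tokens without a matching variable are left untouched.
printf 'remote <SERVER_IP> <SERVER_PORT>\nverb <UNSET>\n' > /tmp/template.demo
SERVER_IP=192.0.2.1 SERVER_PORT=1194 awk '{
    rest = $0;
    while (match(rest, /<[A-Z0-9_]+>/)) {
        name = substr(rest, RSTART + 1, RLENGTH - 2);
        if (ENVIRON[name]) gsub("<" name ">", ENVIRON[name]);
        rest = substr(rest, RSTART + RLENGTH);
    }
    print;
}' /tmp/template.demo
# → remote 192.0.2.1 1194
# → verb <UNSET>
</source>

A token that survives (like the last line above) is exactly what you see when the corresponding --option was not given.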
==OpenVPN Server==
===OpenVPN Server Template===
# I am using the mysql-auth plugin from [https://github.com/chantra/openvpn-mysql-auth openvpn-mysql-auth]
# On the OpenVPN server the user openvpn has uid 1195 and gid 1195, and I mount a tmp directory for it via /etc/fstab like this:
<pre>
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
</pre>
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</source>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for client===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</source>
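After generating a config it is worth checking that no placeholder survived substitution, since a forgotten --option leaves its token verbatim in the output and OpenVPN will reject the file. A hedged check (the sample file below is fabricated for illustration; run the grep against your real generated config):

<source lang=bash>
# List any <PLACEHOLDER> tokens that survived substitution; any hit means a
# --option was missing when get_ovpn.sh ran. The demo file is fabricated.
printf 'remote 192.0.2.1 1194\npush "dhcp-option DNS <DNS1>"\n' > /tmp/vpnclient.demo
grep -o '<[A-Z0-9_]*>' /tmp/vpnclient.demo
# → <DNS1>
</source>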
640d3fd195c0171786a8341ab4f3576a43145d07
1238
1237
2016-02-15T14:31:16Z
Lollypop
2
/* Create your CA certificate */
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=Create CA user=
<source lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</source>
=Do everything CA specific as CA user!=
<source lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</source>
=Setup EasyRSA=
==Ubuntu packages==
<source lang=bash>
# aptitude install openvpn easy-rsa
</source>
==Create your CA==
<source lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</source>
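The symlink loop above assumes the easy-rsa helpers live in /usr/share/easy-rsa; if a package update moves them, the links dangle silently. A small generic check (works in any directory of symlinks):

<source lang=bash>
# Print dangling symlinks in the current directory; no output means every
# linked easy-rsa helper still resolves.
find . -maxdepth 1 -type l ! -exec test -e {} \; -print
</source>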
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Run this only once, before you start using your CA. It deletes everything: keys and certificates!'''
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
</source>
==Generate DH parameter==
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
</source>
or
<source lang=bash>
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
</source>
==Generate TLS-auth parameter==
<source lang=bash>
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
</source>
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and need to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can then call it like this:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
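The $ENV::KEY_PASS line uses OpenSSL's environment lookup in config files; on the command line the equivalent pass phrase source is env:VARNAME. A self-contained sanity check of that mechanism (throwaway key, unrelated to your CA):

<source lang=bash>
# Encrypt a throwaway key with a password taken from the environment, then
# decrypt it the same way to prove the lookup works.
KEY_PASS=swordfish openssl genrsa -aes256 -passout env:KEY_PASS -out /tmp/demo.key 2048
KEY_PASS=swordfish openssl rsa -in /tmp/demo.key -passin env:KEY_PASS -noout -check
# → RSA key ok
</source>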
==Create your CA certificate==
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
</source>
Check it with:
<source lang=bash>
$ openssl x509 -noout -text -in keys/ca.crt
</source>
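get_ovpn.sh later embeds this CA's fingerprint as verify-hash, using the same openssl call shown below. Here it runs on a throwaway self-signed certificate so the example is self-contained; with a real CA, point it at keys/ca.crt instead:

<source lang=bash>
# Create a throwaway self-signed certificate and print its fingerprint,
# the same call get_ovpn.sh uses for the verify-hash line.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1
openssl x509 -noout -fingerprint -in /tmp/demo-ca.crt
</source>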
==Create the server certificate==
<source lang=bash>
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server
</source>
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free, but at your own risk!!!
# Defaults:
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${configtype}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a vlaue!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
export SERVER
;;
*)
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
gsub(/\//,", ", retval)
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
# Replace all <VARIABLE> in template file with ENVIRON["VARIABLE"]
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# CA Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</source>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
==OpenVPN Server ==
===OpenVPN Server Template===
# I am using the mysql-auth-plugin from [https://github.com/chantra/openvpn-mysql-auth https://github.com/chantra/openvpn-mysql-auth]
# On the OpenVPN-Server the user openvpn has uid 1195 and gid 1195 and I have a TMP-dir for this user in the /etc/fstab like this:
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</source>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</source>
6cbc38ac53164d8b70ce10621e2a9cc25f2dae73
1239
1238
2016-02-15T15:32:55Z
Lollypop
2
/* Create the server certificate */
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=create CA user=
<source lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</source>
=Do everything CA specific as CA user!=
<source lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</source>
=Setup EasyRSA=
==Ubuntu packets==
<source lang=bash>
# aptitude install openvpn easy-rsa
</source>
==Create your CA==
<source lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</source>
==Edit the defaults==
Setup proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Really just do this before you start with your CA. It will delete everything: keys and certificates!!!'''
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
==Generate DH parameter==
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
or
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
==Generate TLS-auth parameter==
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
==User certificates with passwords in scripts==
If you want to work with password encrypted keys and wat to batch process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that the openssl calls taking the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this for example:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
==Create your CA certificate==
$ cd OpenVPN-CA
$ . vars
$ ./buid-ca
Check it with
$ openssl x509 -noout -text -in keys/ca.crt
==Create the server certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server openvpn-server
For example server keys with 5 years validity:
$ KEY_EXPIRE=1825 ./build-key-server openvpn-server
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann L@rs.Timmann.de> 2016
# You may use it for free but on your own risk!!!
# Defaults:
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${configtype}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a vlaue!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
export SERVER
;;
*)
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
gsub(/\//,", ", retval)
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
# Replace all <VARIABLE> in template file with ENVIRON["VARIABLE"]
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# CA Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</source>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
==OpenVPN Server ==
===OpenVPN Server Template===
# I am using the mysql-auth-plugin from [https://github.com/chantra/openvpn-mysql-auth https://github.com/chantra/openvpn-mysql-auth]
# On the OpenVPN-Server the user openvpn has uid 1195 and gid 1195 and I have a TMP-dir for this user in the /etc/fstab like this:
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</source>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</source>
ae4a338693fec79d33fd72b8c700164a39fce68b
1240
1239
2016-02-15T17:06:06Z
Lollypop
2
/* get_ovpn.sh */
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=create CA user=
<source lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</source>
=Do everything CA-specific as the CA user!=
<source lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</source>
=Setup EasyRSA=
==Ubuntu packages==
<source lang=bash>
# aptitude install openvpn easy-rsa
</source>
==Create your CA==
<source lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</source>
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
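For orientation, a minimal sketch of typical overrides in <i>vars</i> (these variable names come from easy-rsa 2; the values are placeholders, not recommendations):
<source lang=bash>
# Defaults picked up by the build-* scripts once this file is sourced
export KEY_SIZE=4096               # RSA key size in bits
export KEY_EXPIRE=365              # certificate lifetime in days
export CA_EXPIRE=3650              # CA certificate lifetime in days
export KEY_COUNTRY="DE"            # default certificate subject fields
export KEY_PROVINCE="Example State"
export KEY_CITY="Example City"
export KEY_ORG="Example Org"
export KEY_EMAIL="ca@example.org"
</source>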
==Base setup (only once, at the very beginning!!!)==
'''Only run this before you start using your CA. It deletes everything: all keys and certificates!!!'''
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
==Generate DH parameter==
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
or
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
==Generate TLS-auth parameter==
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and want to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i> in your easy-rsa OpenSSL config file:
<source lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</source>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this for example:
<source lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</source>
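This works because OpenSSL can read passphrases from the environment. Independent of easy-rsa, you can see the same mechanism with a plain openssl call (the file name and password below are just examples):
<source lang=bash>
# Generate an AES-256-encrypted RSA key; openssl reads the passphrase
# from the environment variable named after "env:"
KEY_PASS="password" openssl genrsa -aes256 -passout env:KEY_PASS \
    -out /tmp/demo_user.key 2048

# the key on disk is encrypted
grep -q ENCRYPTED /tmp/demo_user.key && echo encrypted
</source>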
==Create your CA certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
Check it with
$ openssl x509 -noout -text -in keys/ca.crt
==Create the server certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server openvpn-server
For example server keys with 5 years validity:
$ KEY_EXPIRE=1825 ./build-key-server openvpn-server
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free, but at your own risk!!!
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${TYPE}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
;;
*)
param=${param#--}
param=${param//-/_} # turn all hyphens into underscores
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
export SERVER
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
if(i==NF) gsub(/\//,", ", $i)
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# CA Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</source>
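The heart of the script is the placeholder substitution in the gawk main block: every --what-ever=value option is exported as WHAT_EVER, and each <WHAT_EVER> in the template is replaced with ENVIRON["WHAT_EVER"]. The same idea can be sketched in pure bash (demo variables only; unlike the gawk version, this sketch drops unset placeholders instead of leaving them in place):
<source lang=bash>
line='server <SERVER_NET> <SERVER_NETMASK>'
SERVER_NET=10.214.60.128 SERVER_NETMASK=255.255.255.128

# replace each <NAME> with the value of $NAME; unset placeholders
# are removed, so the loop always terminates
re='<([A-Z0-9_]+)>'
while [[ $line =~ $re ]]; do
    name=${BASH_REMATCH[1]}
    line=${line//<${name}>/${!name:-}}
done
echo "$line"    # server 10.214.60.128 255.255.255.128
</source>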
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: client.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
==OpenVPN Server==
===OpenVPN Server Template===
# I am using the mysql-auth plugin from [https://github.com/chantra/openvpn-mysql-auth https://github.com/chantra/openvpn-mysql-auth]
# On the OpenVPN server the user openvpn has uid 1195 and gid 1195, and there is a tmpfs mount for this user in /etc/fstab like this:
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</source>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for client===
<source lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</source>
219b9c717c899fa0850e9ee61379300af2cf0daa
Bash cheatsheet
0
37
1241
986
2016-02-19T11:42:30Z
Lollypop
2
wikitext
text/x-wiki
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using ssh public keys for authentication and want to use a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the suffix.
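The nested default expansion used for HISTFILE picks the first non-empty value:
<source lang=bash>
FINGERPRINT=""
SUDO_USER=""
echo "${FINGERPRINT:-${SUDO_USER:-default}}"    # default

SUDO_USER="alice"
echo "${FINGERPRINT:-${SUDO_USER:-default}}"    # alice

FINGERPRINT="SHA256:k3ydemo"
echo "${FINGERPRINT:-${SUDO_USER:-default}}"    # SHA256:k3ydemo
</source>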
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Only allow users from group ssh to log in via ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Loops=
==Number sequences==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. always stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
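Since bash 4, brace expansion itself can take a step:
<source lang=bash>
echo {0..9..3}    # 0 3 6 9
echo {0..9..2}    # 0 2 4 6 8
</source>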
=Arithmetic=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
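The $[ ] syntax still works but is deprecated; the portable form is the arithmetic expansion $(( )):
<source lang=bash>
echo $(( 3 + 4 ))     # 7
echo $(( 2 ** 8 ))    # 256
</source>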
[[Kategorie:Bash]]
2bb249ca2dec5e554734667ceb53a2808cb64f4d
1242
1241
2016-02-19T11:42:59Z
Lollypop
2
/* Path name resolving function */
wikitext
text/x-wiki
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using ssh public keys for authenticating and want to use a seperate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty the sudo user will be used.
If $SUDO_USER is empty too, use "default" as extension.
I forced rsyslog to write another logfile where group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Let only users from group ssh login via ssh except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Nützliche Variablenersetzungen=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Loops=
==Number sequences==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work too, e.g. always 3 ahead:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with several counters at once:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
</source>
[[Kategorie:Bash]]
d5d94811460ae8cdafa08a722cfe35d04ea3439b
1263
1242
2016-05-04T14:49:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the suffix.
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Loops=
==Number sequences==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work too, e.g. always 3 ahead:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with several counters at once:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
</source>
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the no-op builtin <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
c9745721d358a754ce27645d9b7409f4750f9500
1264
1263
2016-05-04T14:50:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the suffix.
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Loops=
==Number sequences==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work too, e.g. always 3 ahead:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with several counters at once:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the no-op builtin <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
</source>
6f9621b82aae60fd7f1f1b46808f2415b1105d0c
1269
1264
2016-05-12T12:22:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the suffix.
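The fallback chain used for HISTFILE can be exercised on its own. A minimal sketch (the fingerprint and user values are made up):

```shell
# ${FINGERPRINT:-${SUDO_USER:-default}} picks the first non-empty value.
history_suffix() {
    local FINGERPRINT=$1 SUDO_USER=$2
    echo "${FINGERPRINT:-${SUDO_USER:-default}}"
}
history_suffix "SHA256:aBcD123" ""   # -> SHA256:aBcD123 (key fingerprint wins)
history_suffix "" "alice"            # -> alice          (fall back to sudo user)
history_suffix "" ""                 # -> default        (last resort)
```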
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
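These expansions are purely textual, so unlike dirname(1)/basename(1) they do not special-case a name without any slash. A quick sketch:

```shell
p=/usr/bin/blafasel
echo "${p%/*}"    # /usr/bin  (like dirname)
echo "${p##*/}"   # blafasel  (like basename)

p=blafasel        # no slash at all
echo "${p##*/}"   # blafasel  (still fine)
echo "${p%/*}"    # blafasel  (NOT "." as dirname would print)
```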
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Loops=
==Number sequences==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work too, e.g. always 3 ahead:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with several counters at once:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the no-op builtin <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
</source>
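The <i>$[ ]</i> form is deprecated in bash; <i>$(( ))</i> does the same and also works as a standalone arithmetic command. A sketch:

```shell
echo $(( 3 + 4 ))   # 7
echo $(( 2 ** 8 ))  # 256
i=5
(( i += 2 ))        # arithmetic command; no $ needed inside (( ))
echo "$i"           # 7
```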
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $*"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
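The name parsing in the skeleton can be tested in isolation; a minimal sketch (the daemon name <i>mydaemon</i> is made up):

```shell
# Derive the command from the script name, as the skeleton above does:
# mydaemon-start.sh -> start, mydaemon-stop.sh -> stop, mydaemon.sh -> ""
parse_command() {
    local NAME=$1
    local SELF=${2##*/}             # basename of the called path
    local command=${SELF%.sh}       # strip the .sh suffix
    command=${command##${NAME}-}    # strip the "NAME-" prefix
    [ "_${command}_" = "_${NAME}_" ] && command=""
    echo "$command"
}
parse_command mydaemon /etc/init.d/mydaemon-start.sh   # -> start
parse_command mydaemon mydaemon-restart.sh             # -> restart
parse_command mydaemon mydaemon.sh                     # -> (empty)
```

Calling the script via symlinks named <i>mydaemon-start.sh</i> etc. then selects the matching case branch without an argument.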
af6025dcf471530b0dcb6660462cdf746e9ced2b
Dpkg
0
244
1243
945
2016-02-25T14:58:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Missing key id NO_PUBKEY==
<source lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</source>
==Packages from a specific source==
===Prerequisite: dctrl-tools===
<source lang=bash>
sudo apt-get install dctrl-tools
</source>
===Show packages===
For example, all PPA packages:
<source lang=bash>
sudo grep-dctrl -sPackage . /var/lib/apt/lists/ppa*_Packages
</source>
==From where is my package installed?==
<source lang=bash>
sudo apt-cache policy <package>
</source>
==Does my file match the checksum from the package?==
If you fear you have been hacked, verify your binaries!
===Prerequisite: debsums===
<source lang=bash>
sudo apt-get install debsums
</source>
===Verify packages===
<source lang=bash>
$ sudo debsums unhide.rb
/usr/bin/unhide.rb OK
/usr/share/doc/unhide.rb/changelog.Debian.gz OK
/usr/share/doc/unhide.rb/copyright OK
/usr/share/lintian/overrides/unhide.rb OK
/usr/share/man/man8/unhide.rb.8.gz OK
</source>
95402c383cac1d7c438a1c1b7586125d63e1d059
1244
1243
2016-02-25T14:58:57Z
Lollypop
2
/* Verify packages */
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Missing key id NO_PUBKEY==
<source lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</source>
==Packages from a specific source==
===Prerequisite: dctrl-tools===
<source lang=bash>
sudo apt-get install dctrl-tools
</source>
===Show packages===
For example, all PPA packages:
<source lang=bash>
sudo grep-dctrl -sPackage . /var/lib/apt/lists/ppa*_Packages
</source>
==From where is my package installed?==
<source lang=bash>
sudo apt-cache policy <package>
</source>
==Does my file match the checksum from the package?==
If you fear you have been hacked, verify your binaries!
===Prerequisite: debsums===
<source lang=bash>
sudo apt-get install debsums
</source>
===Verify packages===
<source lang=bash>
sudo debsums <package name>
</source>
<source lang=bash>
$ sudo debsums unhide.rb
/usr/bin/unhide.rb OK
/usr/share/doc/unhide.rb/changelog.Debian.gz OK
/usr/share/doc/unhide.rb/copyright OK
/usr/share/lintian/overrides/unhide.rb OK
/usr/share/man/man8/unhide.rb.8.gz OK
</source>
5bc28d1c0823284bb7e42c0df195812afe5260a6
1248
1244
2016-03-10T12:30:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Missing key id NO_PUBKEY==
<source lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</source>
==Package sources resolving to IPv6 addresses sometimes cause problems==
To force use of the returned IPv4 addresses:
<source lang=bash>
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
</source>
==Packages from a specific source==
===Prerequisite: dctrl-tools===
<source lang=bash>
sudo apt-get install dctrl-tools
</source>
===Show packages===
For example, all PPA packages:
<source lang=bash>
sudo grep-dctrl -sPackage . /var/lib/apt/lists/ppa*_Packages
</source>
==From where is my package installed?==
<source lang=bash>
sudo apt-cache policy <package>
</source>
==Does my file match the checksum from the package?==
If you fear you have been hacked, verify your binaries!
===Prerequisite: debsums===
<source lang=bash>
sudo apt-get install debsums
</source>
===Verify packages===
<source lang=bash>
sudo debsums <package name>
</source>
<source lang=bash>
$ sudo debsums unhide.rb
/usr/bin/unhide.rb OK
/usr/share/doc/unhide.rb/changelog.Debian.gz OK
/usr/share/doc/unhide.rb/copyright OK
/usr/share/lintian/overrides/unhide.rb OK
/usr/share/man/man8/unhide.rb.8.gz OK
</source>
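Under the hood, debsums compares installed files against the checksum lists dpkg keeps in <i>/var/lib/dpkg/info/</i>, and the check itself is essentially <i>md5sum -c</i>. A self-contained sketch with a made-up file:

```shell
# Build a tiny "package": one file plus its md5sums list, then verify it.
tmp=$(mktemp -d)
echo "hello" > "$tmp/binary"
( cd "$tmp" && md5sum binary > binary.md5sums )   # what dpkg ships per package
( cd "$tmp" && md5sum -c binary.md5sums )         # prints "binary: OK"
echo "tampered" > "$tmp/binary"                   # simulate a modified file
( cd "$tmp" && md5sum -c binary.md5sums ) || echo "checksum mismatch detected"
rm -rf "$tmp"
```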
80ffce7f12af065c3d07793e28937a6dbc6537f0
Linux Tipps und Tricks
0
273
1245
1164
2016-03-01T16:05:59Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
<source lang=bash>
# mount
# pvs
# zpool status
# etc.
</source>
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the ID reported by <i>lsscsi</i> above.
Et voilà:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
f27ba5e20df030361f17b162e08d885044e7f84a
Fail2ban
0
276
1246
2016-03-09T14:31:05Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Security]] [[Kategorie:Linux]] ==Installation== ===Debian / Ubuntu=== <source lang=bash> # apt-get install fail2ban </source> ==Configuration== T…“
wikitext
text/x-wiki
[[Kategorie:Security]]
[[Kategorie:Linux]]
==Installation==
===Debian / Ubuntu===
<source lang=bash>
# apt-get install fail2ban
</source>
==Configuration==
To be safe across updates, put your personal settings in the <i>*.local</i> files; they will not be overwritten by update procedures.
===paths-overrides.local===
My logfiles have date parts in their names, so the fail2ban defaults would fail to find them.
<source lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
# doveadm log find
Looking for log files from /var/log
Debug: /var/log/dovecot/dovecot.debug-20160309
Info: /var/log/dovecot/dovecot.debug-20160309
Warning: /var/log/dovecot/dovecot.log-20160309
Error: /var/log/dovecot/dovecot.log-20160309
Fatal: /var/log/dovecot/dovecot.log-20160309
</source>
<source lang=ini>
[DEFAULT]
dovecot_log = /var/log/dovecot/dovecot.log-*
exim_main_log = /var/log/exim/mainlog-*
</source>
===jail.local===
<source lang=ini>
[DEFAULT]
bantime = 3600
#
[sshd]
enabled = true
[exim-spam]
enabled = true
[exim]
enabled = true
[sshd-ddos]
enabled = true
[dovecot]
enabled = true
[sieve]
enabled = true
</source>
78c2397a3055c953715d106100ae9b254973c098
1247
1246
2016-03-09T14:31:35Z
Lollypop
2
/* jail.local */
wikitext
text/x-wiki
[[Kategorie:Security]]
[[Kategorie:Linux]]
==Installation==
===Debian / Ubuntu===
<source lang=bash>
# apt-get install fail2ban
</source>
==Configuration==
To be safe across updates, put your personal settings in the <i>*.local</i> files; they will not be overwritten by update procedures.
===paths-overrides.local===
My logfiles have date parts in their names, so the fail2ban defaults would fail to find them.
<source lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
# doveadm log find
Looking for log files from /var/log
Debug: /var/log/dovecot/dovecot.debug-20160309
Info: /var/log/dovecot/dovecot.debug-20160309
Warning: /var/log/dovecot/dovecot.log-20160309
Error: /var/log/dovecot/dovecot.log-20160309
Fatal: /var/log/dovecot/dovecot.log-20160309
</source>
<source lang=ini>
[DEFAULT]
dovecot_log = /var/log/dovecot/dovecot.log-*
exim_main_log = /var/log/exim/mainlog-*
</source>
===jail.local===
<source lang=ini>
[DEFAULT]
bantime = 3600
[sshd]
enabled = true
[exim-spam]
enabled = true
[exim]
enabled = true
[sshd-ddos]
enabled = true
[dovecot]
enabled = true
[sieve]
enabled = true
</source>
7d2dda0c0dd58f2d11c3f11eb5ddd3b94cd7105a
Linux udev
0
88
1249
1170
2016-03-29T15:58:44Z
Lollypop
2
Lollypop verschob Seite [[Linux udev permissions]] nach [[Linux udev]]: simply all udev stuff
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK user mysql unknown... maybe I should install MySQL ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
836029582d626444f04fe9947b508785ea586e17
1251
1249
2016-03-29T16:01:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
==Persistent network interface names==
If you have no <i>/etc/udev/rules.d/70-persistent-net.rules</i>, just create one:
<source lang=bash>
# lshw -C network | awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}' >> /etc/udev/rules.d/70-persistent-net.rules
</source>
Change order with:
<source lang=bash>
# vi /etc/udev/rules.d/70-persistent-net.rules
</source>
Then let udev reread the file:
<source lang=bash>
# udevadm trigger --action=add --subsystem-match=net --verbose
</source>
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK user mysql unknown... maybe I should install MySQL ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
09d5b47e0671ff1701e0c86b19a8c1d1e3e87fd8
1268
1251
2016-05-12T07:23:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|udev]]
==Persistent network interface names==
If you have no <i>/etc/udev/rules.d/70-persistent-net.rules</i>, just create one:
<source lang=bash>
# lshw -C network | awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}' >> /etc/udev/rules.d/70-persistent-net.rules
</source>
or add a specific interface to <i>/etc/udev/rules.d/70-persistent-net.rules</i>:
<source lang=bash>
# MATCHADDR="00:50:56:a1:20:22" INTERFACE=eth2 /lib/udev/write_net_rules
</source>
Change order with:
<source lang=bash>
# vi /etc/udev/rules.d/70-persistent-net.rules
</source>
Then let udev reread the file:
<source lang=bash>
# udevadm trigger --action=add --subsystem-match=net --verbose
</source>
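The awk program in the lshw one-liner can be dry-run against canned <i>lshw</i> output before appending anything to the rules file. A sketch (MAC address and interface name are made up):

```shell
# Feed fake "lshw -C network" lines through the same awk program as above.
rule=$(printf '%s\n' \
    '  logical name: eth0' \
    '  serial: 00:50:56:a1:20:22' \
  | awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}')
echo "$rule"
```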
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<source lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</source>
===Test your rule===
<source lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</source>
OK user mysql unknown... maybe I should install MySQL ;-).
After that:
<source lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</source>
===Trigger your rule===
<source lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
6acd06be5693e3b0f5f705212caefb2e375796a8
Linux udev permissions
0
277
1250
2016-03-29T15:58:44Z
Lollypop
2
Lollypop verschob Seite [[Linux udev permissions]] nach [[Linux udev]]: simply all udev stuff
wikitext
text/x-wiki
#WEITERLEITUNG [[Linux udev]]
9771d822afa7e6098d85b6eeca0b8bb4eda8aecc
Solaris 11 Zones
0
257
1252
991
2016-03-31T14:21:19Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
..
Planning linked: 0/1 done; 1 working: zone:zone01
Linked image 'zone:zone01' output:
...
# zlogin zone01 beadm list | tail -1
solaris-7 !RO - 18.09M static 2015-12-21 17:57
# clrs disable zone01-rs
</source>
Switch to patched node...
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
<source lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</source>
==zoneclone.sh==
<source lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</source>
5d8059508766d09e02c0225533f6b66ce9fd3eb2
1253
1252
2016-04-01T01:33:07Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
..
Planning linked: 0/1 done; 1 working: zone:zone01
Linked image 'zone:zone01' output:
...
# zlogin zone01 beadm list | tail -1
solaris-7 !RO - 18.09M static 2015-12-21 17:57
# clrs disable zone01-rs
</source>
Switch to patched node...
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
<source lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</source>
==zoneclone.sh==
<source lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</source>
==Way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (indexing man pages etc.) could not be done in the immutable zone after the Solaris update. So one <i>boot -w</i> is necessary before the zone comes up in the cluster.
===Move all RGs from node first===
<source lang=bash>
# clrg evacuate -n $(hostname) +
</source>
===Update Solaris===
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
# init 6
</source>
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online:
<source lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</source>
===Attach, boot -w, detach without cluster===
<source lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for services to start
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</source>
===Enable zone in cluster===
<source lang=bash>
# clrs enable zone01-zone-rs
</source>
94c7586952266d1e22808fb09b64b42c9cdd6cee
1254
1253
2016-04-01T01:35:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
==zoneclone.sh==
<source lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</source>
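The parent-dataset check in the script relies on bash's <i>${DST_DATASET%/*}</i> expansion, which strips the last path component. A standalone sketch (with a made-up dataset name) shows what the script actually passes to <i>zfs list</i>:

```shell
#!/bin/sh
# Hypothetical destination dataset, as it would arrive as the 4th argument
DST_DATASET="rpool/zones/zone02"
# %/* removes the shortest trailing "/..." match, leaving the parent dataset
echo "${DST_DATASET%/*}"
# prints: rpool/zones
```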
==A way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (such as indexing man pages) could not be performed inside the immutable zone after the Solaris update, so a single <i>boot -w</i> is necessary before the zone can come up in the cluster.
===Move all RGs from node first===
<source lang=bash>
# clrg evacuate -n $(hostname) +
</source>
===Update Solaris===
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
# init 6
</source>
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online.
<source lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</source>
===Attach, boot -w, detach without cluster===
<source lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for services to start
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</source>
===Enable zone in cluster===
<source lang=bash>
# clrs enable zone01-zone-rs
</source>
==Some other things==
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
<source lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</source>
85f5ea37d759cd25ba26e4cf73b10b34ff7f6e2a
1255
1254
2016-04-01T01:45:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
==zoneclone.sh==
<source lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</source>
==A way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (such as indexing man pages) could not be performed inside the immutable zone after the Solaris update, so a single <i>boot -w</i> is necessary before the zone can come up in the cluster.
<source lang=bash>
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</source>
===Move all RGs from node first===
<source lang=bash>
# clrg evacuate -n $(hostname) +
</source>
===Update Solaris===
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
# init 6
</source>
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online.
<source lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</source>
===Attach, boot -w, detach without cluster===
<source lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zlogin zone01 svcs -xv # <- wait for all services to be ready
...
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</source>
===Enable zone in cluster===
<source lang=bash>
# clrs enable zone01-zone-rs
</source>
==Some other things==
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
<source lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</source>
460ceb928b57e2eed0529149f64633c2e2e94fa7
1256
1255
2016-04-04T09:20:31Z
Lollypop
2
/* Update Solaris */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
==zoneclone.sh==
<source lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</source>
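The nawk filter in zoneclone.sh only rewrites the <i>set zonepath</i> line of the exported zone configuration. Its effect can be exercised standalone with fabricated config lines (using <i>awk</i> here, which behaves the same as Solaris <i>nawk</i> for this program):

```shell
#!/bin/sh
# Feed a minimal fake "zonecfg export" through the same awk program
printf 'create -b\nset zonepath=/zones/old\nset autoboot=false\n' \
| awk -v zonepath=/zones/new '
BEGIN { FS = "=" ; OFS = "=" }
# Only the zonepath line is rebuilt with the new value; others pass through
/set zonepath/ { $2 = zonepath }
{ print }
'
# prints:
#   create -b
#   set zonepath=/zones/new
#   set autoboot=false
```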
==A way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (such as indexing man pages) could not be performed inside the immutable zone after the Solaris update, so a single <i>boot -w</i> is necessary before the zone can come up in the cluster.
<source lang=bash>
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</source>
===Move all RGs from node first===
<source lang=bash>
# clrg evacuate -n $(hostname) +
</source>
===Update Solaris===
<source lang=bash>
# pkg update --be-name $(pkg info -r system/kernel | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}') --accept -v
# init 6
</source>
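The BE name above is derived from the <i>pkg info -r system/kernel</i> output. Run against captured sample lines (values invented for illustration, and <i>awk</i> substituted for <i>nawk</i>), the filter produces a name like this:

```shell
#!/bin/sh
# Sample lines in the shape printed by "pkg info -r system/kernel" (values invented)
printf '          Build Release: 5.11\n                 Branch: 0.175.3.4.0.5.0\n' \
| awk '
/Build Release:/ { split($NF, release, ".") }
/Branch:/ {
    split($NF, versions, ".")
    # release 5.11 + branch 0.175.3.4.x -> Solaris_<11>.<3>_SRU<4>
    print "Solaris_" release[2] "." versions[3] "_SRU" versions[4]
}
'
# prints: Solaris_11.3_SRU4
```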
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online.
<source lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</source>
===Attach, boot -w, detach without cluster===
<source lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zlogin zone01 svcs -xv # <- wait for all services to be ready
...
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</source>
===Enable zone in cluster===
<source lang=bash>
# clrs enable zone01-zone-rs
</source>
==Some other things==
<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
<source lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</source>
29b0bc76a00380682c9944ed0dd0b067b2571ab9
Ubuntu networking
0
278
1257
2016-04-05T13:56:29Z
Lollypop
2
The page was created: „[[Kategorie:Ubuntu]] [[Kategorie:Linux]] ==Disable IPv6== ===Create /etc/sysctl.d/60-disable-ipv6.conf=== Create a file named <i>/etc/sysctl.d/60-disable-ipv…“
wikitext
text/x-wiki
[[Kategorie:Ubuntu]]
[[Kategorie:Linux]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
4dfd2b98ca0cd04f56ad690fb1b91993a2e53c9e
1258
1257
2016-04-05T13:57:04Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
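If you prefer to generate the file from a list of interface scopes, a small loop (a sketch; redirect its output into the sysctl.d file as root) produces the same three lines and is easy to extend:

```shell
#!/bin/sh
# Emit one disable_ipv6 line per scope ("all", "default", "lo")
for scope in all default lo ; do
    printf 'net.ipv6.conf.%s.disable_ipv6 = 1\n' "${scope}"
done
```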
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
5c46e26dd824bce101e4fdb0d931579b428a5ede
Ubuntu apt
0
120
1259
332
2016-04-05T13:57:31Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
665436e66cca2b143eb8abc5729bc60aba95c281
1260
1259
2016-04-05T13:57:55Z
Lollypop
2
Lollypop moved page [[Ubuntu apt with proxy]] to [[Ubuntu apt]]: More general
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
665436e66cca2b143eb8abc5729bc60aba95c281
Ubuntu apt with proxy
0
279
1261
2016-04-05T13:57:55Z
Lollypop
2
Lollypop moved page [[Ubuntu apt with proxy]] to [[Ubuntu apt]]: More general
wikitext
text/x-wiki
#WEITERLEITUNG [[Ubuntu apt]]
bbb0f47bbf62457881f75596a80b45857402f87a
Apache
0
205
1262
1174
2016-04-27T13:06:40Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the default values sensibly===
Adjust Country and the other default fields to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, you can remove the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A top-like view over the logs of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
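The <i>xargs -n 1 echo -f</i> trick turns every log file name into a separate <i>-f &lt;file&gt;</i> option pair before handing the whole list to apachetop. With <i>echo</i> substituted for <i>apachetop</i> and hypothetical log names, the rewriting is easy to see:

```shell
#!/bin/sh
# Each name becomes "-f <name>" (one echo per argument),
# then the second xargs passes all pairs to a single command
printf 'site1.log\nsite2.log\n' | xargs -n 1 echo -f | xargs echo
# prints: -f site1.log -f site2.log
```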
30513dea0a5bfd70037879e8c78a1ba7fe423d9d
Oracle Tips and Tricks
0
220
1265
799
2016-05-11T13:15:14Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME BLA
do
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile has been sourced (as happens at login), you have a lower-case alias for each ORACLE_SID that sets everything you need.
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
61044d054db7f9445f1611d45e3d633e6c746981
1266
1265
2016-05-11T13:18:43Z
Lollypop
2
/* Set environment in .bash_profile */
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME BLA
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile has been sourced (as happens at login), you have a lower-case alias for each ORACLE_SID that sets everything you need.
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
4683071639a5483766aa0d606c37943913994c6b
1267
1266
2016-05-11T13:19:02Z
Lollypop
2
/* Set environment in .bash_profile */
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
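The loop above can be tried safely against a throwaway oratab. The sketch below (fabricated SIDs and paths; plain output instead of alias definitions so it runs non-interactively) shows which lines the filter keeps and how <i>${ORACLE_SID,,}</i> lowercases the alias name:

```shell
#!/bin/bash
# Fabricated oratab contents: a comment, a RAC/+ASM entry, and two real entries
ORATAB_CONTENT='# comment line
+ASM1:/u01/app/grid:N
ORCL:/u01/app/oracle/product/12.1:Y
TESTDB:/u01/app/oracle/product/11.2:N'

while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART ; do
    # Skip empty lines, comments, and SIDs starting with + or - (RAC)
    if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
    # ${ORACLE_SID,,} lowercases the SID for use as the alias name
    echo "alias ${ORACLE_SID,,}: ORACLE_HOME=${ORACLE_HOME}"
done <<< "${ORATAB_CONTENT}"
# prints:
#   alias orcl: ORACLE_HOME=/u01/app/oracle/product/12.1
#   alias testdb: ORACLE_HOME=/u01/app/oracle/product/11.2
```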
Once .bash_profile has been sourced (as happens at login), you have a lower-case alias for each ORACLE_SID that sets everything you need.
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
ff1a631bcc904cbe7a927c786598675094f684ed
Hauptseite
0
1
1270
675
2016-05-25T15:33:53Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
Bitte zuerst meinen [[Project:General_disclaimer|Haftungsausschluss]] lesen!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
<categorytree mode=pages>KnowHow</categorytree>
=[[:Kategorie:Projekte|Meine Projekte]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration variables]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
In its ruling of 12 May 1998, the Landgericht Hamburg decided that by placing a link one can be held partly responsible for the content of the linked pages. According to the court, this can only be prevented by explicitly distancing oneself from that content. For all links on this homepage: I hereby explicitly distance myself from all content of all pages linked from my homepage and do not adopt that content as my own.
598a043777438c0da208735d74d66c556a4e09f6
1271
1270
2016-05-25T15:34:13Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
Bitte zuerst meinen [[Project:General_disclaimer|Haftungsausschluss]] lesen!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages>KnowHow</categorytree>
=[[:Kategorie:Projekte|Meine Projekte]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration variables]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
In its ruling of 12 May 1998, the Landgericht Hamburg decided that by placing a link one can be held partly responsible for the content of the linked pages. According to the court, this can only be prevented by explicitly distancing oneself from that content. For all links on this homepage: I hereby explicitly distance myself from all content of all pages linked from my homepage and do not adopt that content as my own.
01f93583b9c32c3e4f9e0f7c5eb29e2cddd0441c
1272
1271
2016-05-25T15:34:53Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages hideroot=on depth=2>KnowHow</categorytree>
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may become jointly responsible for the content of the linked pages. This can only be prevented by expressly distancing oneself from that content. For all links on this homepage: I hereby expressly distance myself from the content of all pages linked from my homepage and do not adopt that content as my own.
cc3c548b38ae89d92b4245b92987e97972594833
1273
1272
2016-05-25T15:35:59Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree>KnowHow</categorytree>
<categorytree mode=pages depth=2>KnowHow</categorytree>
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may become jointly responsible for the content of the linked pages. This can only be prevented by expressly distancing oneself from that content. For all links on this homepage: I hereby expressly distance myself from the content of all pages linked from my homepage and do not adopt that content as my own.
234dc0033ac2fc38d8a8d185154a3ebbd8f10d12
Hauptseite
0
1
1274
1273
2016-05-25T15:36:19Z
Lollypop
2
/* KnowHow */
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages depth=2>KnowHow</categorytree>
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may become jointly responsible for the content of the linked pages. This can only be prevented by expressly distancing oneself from that content. For all links on this homepage: I hereby expressly distance myself from the content of all pages linked from my homepage and do not adopt that content as my own.
5c387bcf021619fdafc2d8738dd21645ac7be908
VMWare Certificate
0
280
1275
2016-05-30T11:38:16Z
Lollypop
2
The page was created: „[[Kategorie:VMware]] [[Kategorie:Security]] == Generate a new certificate == === Disable the shell warning === <pre> -> Inventory -> Hosts and Clusters…“
wikitext
text/x-wiki
[[Kategorie:VMware]]
[[Kategorie:Security]]
== Generate a new certificate ==
=== Disable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 1
</pre>
=== Allow SSH through the firewall ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Firewall
-> Incoming Connections
-> Edit
-> Enable SSH server
</pre>
=== Enable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Start SSH
</pre>
<source lang=bash>
$ ssh root@esx-host
~ # cd /etc/vmware/ssl
/etc/vmware/ssl # mv rui.key rui.key.orig
/etc/vmware/ssl # mv rui.crt rui.crt.orig
/etc/vmware/ssl # /sbin/generate-certificates
/etc/vmware/ssl # ls -al *.key *.crt
-rw-r--r-- 1 root root 1440 May 30 09:33 rui.crt
-r-------- 1 root root 1704 May 30 09:33 rui.key
</source>
=== Disable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Stop SSH
</pre>
=== Re-enable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 0
</pre>
c80079de96610f5c5822c2bd5583fa7f6384401c
1276
1275
2016-05-30T15:53:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
[[Kategorie:Security]]
== Generate a new certificate ==
=== Disable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 1
</pre>
=== Allow SSH through the firewall ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Firewall
-> Incoming Connections
-> Edit
-> Enable SSH server
</pre>
=== Enable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Start SSH
</pre>
<source lang=bash>
$ ssh root@esx-host
~ # cd /etc/vmware/ssl
/etc/vmware/ssl # mv rui.key rui.key.orig
/etc/vmware/ssl # mv rui.crt rui.crt.orig
/etc/vmware/ssl # /sbin/generate-certificates
/etc/vmware/ssl # ls -al *.key *.crt
-rw-r--r-- 1 root root 1440 May 30 09:33 rui.crt
-r-------- 1 root root 1704 May 30 09:33 rui.key
</source>
=== Disable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Stop SSH
</pre>
=== Re-enable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 0
</pre>
098aeef23b1496a6d949d1a73a011803ef95eb62
1295
1276
2016-06-06T13:10:57Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
[[Kategorie:Security]]
== Generate a new certificate ==
=== Disable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 1
</pre>
=== Allow SSH through the firewall ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Firewall
-> Incoming Connections
-> Edit
-> Enable SSH server
</pre>
=== Enable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Start SSH
</pre>
<source lang=bash>
$ ssh root@esx-host
~ # cd /etc/vmware/ssl
/etc/vmware/ssl # mv rui.key rui.key.orig
/etc/vmware/ssl # mv rui.crt rui.crt.orig
/etc/vmware/ssl # /sbin/generate-certificates
/etc/vmware/ssl # ls -al *.key *.crt
-rw-r--r-- 1 root root 1440 May 30 09:33 rui.crt
-r-------- 1 root root 1704 May 30 09:33 rui.key
</source>
=== Disable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Stop SSH
</pre>
=== Re-enable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 0
</pre>
=== Restart the CIM server ===
The CIM server must be restarted so that the new certificate is actually used.
<pre>
-> Inventory
-> Hosts and Clusters
-> <select the ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> CIM Server
-> Restart
</pre>
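After regenerating, it can be worth checking (over SSH, while it is still enabled) that the new rui.key really belongs to the new rui.crt. A minimal sketch, assuming an OpenSSL binary is available on the host: compare digests of the public key embedded in each file.

```shell
# Check that a certificate and a private key belong together by
# comparing digests of their embedded public keys.
# Sketch only; assumes an OpenSSL binary is available on the host.
cert_matches_key() {
  crt_pub=$(openssl x509 -in "$1" -noout -pubkey | openssl sha256)
  key_pub=$(openssl pkey -in "$2" -pubout | openssl sha256)
  [ "$crt_pub" = "$key_pub" ]
}

# In the ESXi shell, after regenerating:
# cert_matches_key /etc/vmware/ssl/rui.crt /etc/vmware/ssl/rui.key && echo OK
```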
0b12c40c720541750a011b57642efcfdeb766d2b
Qemu
0
281
1277
2016-05-31T08:45:49Z
Lollypop
2
The page was created: „[[Kategorie:Virtualization]]“
wikitext
text/x-wiki
[[Kategorie:Virtualization]]
f280f0a7946f1feef0db9a161626dcab507b7959
1281
1277
2016-05-31T08:47:39Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Qemu]]
3ac8438f9224e63a98f1e9a43084d48570ea1548
1283
1281
2016-05-31T08:50:15Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Qemu]]
=virsh - management user interface=
==Display running domains==
<source lang=bash>
# virsh list
Id Name State
----------------------------------------------------
1 domain_v1 running
</source>
2137bc2ceb18c7ad147e146ed9da0e41806b38fd
1284
1283
2016-05-31T08:55:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Qemu]]
=virsh - management user interface=
==Display running domains==
<source lang=bash>
# virsh list
Id Name State
----------------------------------------------------
1 domain_v1 running
</source>
==Display domain information==
<source lang=bash>
# virsh dominfo domain_v1
Id: 1
Name: domain_v1
UUID: b80fe77e-5bdd-29a9-d4c4-84482ace50ff
OS Type: hvm
State: running
CPU(s): 4
CPU time: 674481.3s
Max memory: 15605760 KiB
Used memory: 15605760 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
</source>
168f6714a18f4c39f24d2f7e4499fdd20c5e772c
1285
1284
2016-05-31T08:55:48Z
Lollypop
2
/* Display running domains */
wikitext
text/x-wiki
[[Kategorie:Qemu]]
=virsh - management user interface=
==Display running domains==
<source lang=bash>
# virsh list
Id Name State
----------------------------------------------------
1 domain_v1 running
</source>
==Display domain information==
<source lang=bash>
# virsh dominfo domain_v1
Id: 1
Name: domain_v1
UUID: b80fe77e-5bdd-29a9-d4c4-84482ace50ff
OS Type: hvm
State: running
CPU(s): 4
CPU time: 674481.3s
Max memory: 15605760 KiB
Used memory: 15605760 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
</source>
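The tabular `virsh list` output lends itself to post-processing with standard text tools. A small sketch, assuming the table layout shown above (two header lines, then Id/Name/State columns):

```shell
# Extract the names of all running domains from `virsh list` output.
# Sketch: assumes the layout shown above (two header lines,
# then "Id Name State" columns).
virsh_running_names() {
  awk 'NR > 2 && $3 == "running" { print $2 }'
}

# In real use: virsh list | virsh_running_names
```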
cb34487acef7b42b59161aea104bcc8ce130a898
Category:Virtualization
14
282
1278
2016-05-31T08:46:06Z
Lollypop
2
The page was created: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Category:VirtualBox
14
223
1279
833
2016-05-31T08:46:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Virtualization]]
f280f0a7946f1feef0db9a161626dcab507b7959
Category:VMWare
14
109
1280
295
2016-05-31T08:46:39Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Virtualization]]
f280f0a7946f1feef0db9a161626dcab507b7959
Category:Qemu
14
283
1282
2016-05-31T08:47:56Z
Lollypop
2
The page was created: „[[Kategorie:Virtualization]]“
wikitext
text/x-wiki
[[Kategorie:Virtualization]]
f280f0a7946f1feef0db9a161626dcab507b7959
OpenVPN Inline Certs
0
104
1286
286
2016-06-01T09:42:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:OpenVPN]]
To get an OpenVPN configuration in a single file, you can inline all referenced files like this:
<source lang=bash>
$ nawk '
/^(tls-auth|ca|cert|key)/ {
type=$1;
file=$2;
# for tls-auth we need the key-direction
if(type=="tls-auth")print "key-direction",$3;
print "<"type">";
while(getline tlsauth<file)
print tlsauth;
close(file);
print "</"type">";
next;
}
{
# All other lines are printed as they are
print;
}' connection.ovpn
</source>
And to split the inlined sections back out into separate files:
<source lang=bash>
$ nawk '
/^<(tls-auth|ca|dh|cert|key)>/ {
type=$1;
gsub(/[<>]/,"",type);
file=type".pem";
print type,file;
print ""> file;
while(getline) {
if($0 == "</"type">"){
fflush(file);
close(file);
break;
}
print $0>>file;}
next;
}
{
# All other lines are printed as they are
print $0;
}' connection.ovpn > connection_.ovpn
</source>
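A quick sanity check that the resulting single-file config really carries its inline sections (a sketch; the section names are the ones handled by the scripts above):

```shell
# Count the inline section openers in a .ovpn file. For the four
# sections handled by the script above (tls-auth, ca, cert, key)
# a fully inlined config should report 4. Sketch only.
count_inline_sections() {
  grep -E -c '^<(tls-auth|ca|cert|key)>' "$1"
}

# Example: count_inline_sections connection.ovpn
```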
d171ba54fbea92b60272b6ffacdf43a8338f1d24
Network troubleshooting
0
284
1287
2016-06-01T12:26:46Z
Lollypop
2
The page was created: „=Network troubleshooting= ==Testing connections from virtual interfaces / virtual IPs== === Ping === <source lang=bash> # ping -I <your virtual ip> <destinatio…“
wikitext
text/x-wiki
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<source lang=bash>
# ping -I <your virtual ip> <destination>
</source>
=== Traceroute ===
<source lang=bash>
# traceroute -s <your virtual ip> <destination>
</source>
=== SSH ===
<source lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</source>
=== Telnet ===
<source lang=bash>
# telnet -b <your virtual ip> <destination>
</source>
d07fef5ecde42a1d391ba952a0c31eb037abb099
1288
1287
2016-06-01T12:27:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Networking]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<source lang=bash>
# ping -I <your virtual ip> <destination>
</source>
=== Traceroute ===
<source lang=bash>
# traceroute -s <your virtual ip> <destination>
</source>
=== SSH ===
<source lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</source>
=== Telnet ===
<source lang=bash>
# telnet -b <your virtual ip> <destination>
</source>
9c54d849a25b0972ba4be4bac9b4b4ac13486547
1290
1288
2016-06-01T12:27:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Networking|Troubleshooting]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<source lang=bash>
# ping -I <your virtual ip> <destination>
</source>
=== Traceroute ===
<source lang=bash>
# traceroute -s <your virtual ip> <destination>
</source>
=== SSH ===
<source lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</source>
=== Telnet ===
<source lang=bash>
# telnet -b <your virtual ip> <destination>
</source>
510c3f99fc082bcb22a4cf7f26b916591b108980
1297
1290
2016-06-14T08:54:06Z
Lollypop
2
/* Ping */
wikitext
text/x-wiki
[[Kategorie:Networking|Troubleshooting]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<source lang=bash>
# ping -I <your virtual ip> <destination>
</source>
On Solaris:
<source lang=bash>
# ping -sni <your virtual ip> <destination>
</source>
=== Traceroute ===
<source lang=bash>
# traceroute -s <your virtual ip> <destination>
</source>
=== SSH ===
<source lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</source>
=== Telnet ===
<source lang=bash>
# telnet -b <your virtual ip> <destination>
</source>
126e73687877ed66120d6ca8eba17a22defa5a91
1300
1297
2016-06-29T14:57:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Networking|Troubleshooting]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<source lang=bash>
# ping -I <your virtual ip> <destination>
</source>
On Solaris:
<source lang=bash>
# ping -sni <your virtual ip> <destination>
</source>
=== Traceroute ===
<source lang=bash>
# traceroute -s <your virtual ip> <destination>
</source>
=== SSH ===
<source lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</source>
=== Telnet ===
<source lang=bash>
# telnet -b <your virtual ip> <destination>
</source>
== Interface details ==
=== Linux ===
<source lang=bash>
# ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
</source>
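The long `ethtool -k` listing can be boiled down to just the features that are switched on. A minimal sketch, assuming the "name: on|off [fixed]" layout shown above:

```shell
# Reduce `ethtool -k` output to just the features that are on,
# skipping the "Features for ...:" header line.
# Sketch: assumes the "name: on|off [fixed]" layout shown above.
ethtool_on_features() {
  awk -F': ' 'NR > 1 && $2 ~ /^on/ { print $1 }'
}

# In real use: ethtool -k eth1 | ethtool_on_features
```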
=== Solaris ===
4228485d08147d00e9ecc0e04483a5e40a0db4d5
Category:Networking
14
285
1289
2016-06-01T12:27:33Z
Lollypop
2
The page was created: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Exim cheatsheet
0
27
1291
223
2016-06-01T16:07:40Z
Lollypop
2
wikitext
text/x-wiki
=Questions and answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mail address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <mail address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 and younger than 50 minutes and are not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
====Reset the rate limit for a user====
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep h100182
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:h100182@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:h100182@server.de
</source>
Delete the entries:
For this you use the somewhat unwieldy tool <i>exim_fixdb</i>. Enter the key taken from the output of the previous command; this selects the corresponding entry in the database. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:h100182@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:h100182@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
[[Kategorie:Exim]]
f9e9e05c8015560f6ca9504b6f80d8e5aeebd159
1292
1291
2016-06-01T16:08:02Z
Lollypop
2
wikitext
text/x-wiki
=Questions and answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mail address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <mail address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 and younger than 50 minutes and are not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
===Reset the rate limit for a user===
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep h100182
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:h100182@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:h100182@server.de
</source>
Delete the entries:
For this you use the somewhat unwieldy tool <i>exim_fixdb</i>. Enter the key taken from the output of the previous command; this selects the corresponding entry in the database. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:h100182@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:h100182@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
[[Kategorie:Exim]]
2d693066c4ebabf9aee4a1e7eb3ebaf6efa4693d
1293
1292
2016-06-01T16:08:16Z
Lollypop
2
wikitext
text/x-wiki
=Questions and answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mail address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <mail address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 and younger than 50 minutes and are not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep h100182
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:h100182@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:h100182@server.de
</source>
Delete the entries:
For this you use the somewhat unwieldy tool <i>exim_fixdb</i>. Enter the key taken from the output of the previous command; this selects the corresponding entry in the database. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:h100182@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:h100182@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
[[Kategorie:Exim]]
64aaedac72b0caae69548254d65c89cdab580f9e
1294
1293
2016-06-01T16:09:19Z
Lollypop
2
/* Ratelimit für einen User zurücksetzen */
wikitext
text/x-wiki
=Questions and answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mail address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <mail address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 and younger than 50 minutes and are not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
For this you use the somewhat unwieldy tool <i>exim_fixdb</i>. Enter the key taken from the output of the previous command; this selects the corresponding entry in the database. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
[[Kategorie:Exim]]
185b15577ecc356ef569062aa3739e15f7a031b5
Solaris OracleClusterware
0
274
1296
1231
2016-06-14T08:45:59Z
Lollypop
2
/* Projects */
wikitext
text/x-wiki
[[Kategorie:Solaris11|Clusterware]]
[[Kategorie:Oracle|Clusterware]]
==Get Solaris release information==
<source lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</source>
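The parsing logic can be tried offline by piping sample <i>pkg info</i> lines (the values below are made up) through the same awk program; with FS='.' the Branch field splits into sub-release ($3) and update ($4):
<source lang=bash>
# Feed two sample lines (hypothetical values) through the same parsing logic.
printf '          Branch: 0.175.3.1.0.5.0\n   Build Release: 5.11\n' | \
awk -F '.' '
  /Build Release:/ { solaris=$NF }
  /Branch:/        { subrel=$3; update=$4 }
  END { printf "Solaris %d.%d Update %d\n", solaris, subrel, update }'
# → Solaris 11.3 Update 1
</source>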
=Needed Solaris packages=
==Install pkg dependencies==
<source lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</source>
==Check pkg dependencies==
<source lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</source>
=User / group settings=
==Groups==
<source lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</source>
==User==
<source lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</source>
===Generate ssh public keys===
<source lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</source>
Add the public keys of the other nodes.
After that, do this as user grid on all the other nodes:
<source lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</source>
Now do a cross login from every node to every other node (including itself) to add all hosts to the known_hosts file. The installer needs this.
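The cross logins can be scripted; a minimal sketch, assuming hypothetical node names grid01..grid03 (the inner ssh is shown as a comment so nothing runs by accident):
<source lang=bash>
# Hypothetical node list; replace with your cluster nodes.
nodes="grid01 grid02 grid03"
for src in $nodes; do
  for dst in $nodes; do
    # On a live cluster replace the echo with:
    #   ssh $src "ssh -o StrictHostKeyChecking=no $dst true"
    echo "$src -> $dst"
  done
done
</source>
This prints one line per node pair (9 lines for 3 nodes), so you can check the pair list before actually logging in.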
==Projects==
<source lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(basic,1024,deny)" \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</source>
===Check project settings===
<source lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</source>
=Directories=
<source lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</source>
=Storage tasks=
==Discover LUNs==
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</source>
==Label Disks==
===Single Disk===
<source lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</source>
===All FC disks===
For x86 you have to call format -> fdisk -> y for all disks first :-\
'''DON'T DO THE NEXT STEP IF YOU DO NOT KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<source lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</source>
<source lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</source>
<source lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</source>
==Set swap to physical RAM==
<source lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</source>
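The RAM=256G value above can be derived from the physical memory reported by prtconf; a sketch with a sample line piped in (the value shown is hypothetical):
<source lang=bash>
# prtconf on Solaris prints e.g. "Memory size: 262144 Megabytes";
# convert that to the value for the zfs -V argument.
echo 'Memory size: 262144 Megabytes' | awk '{ printf "%dG\n", $3 / 1024 }'
# → 256G
</source>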
=Network=
==Check port ranges==
<source lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</source>
==Setup private cluster interconnects==
Example with a small /29 net with six usable IPs (eight including the network and broadcast addresses). This obviously limits the maximum number of nodes to six.
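The address math can be checked quickly:
<source lang=bash>
# Usable hosts in a subnet: 2^(32 - prefix) minus network and broadcast address.
prefix=29
echo $(( (1 << (32 - prefix)) - 2 ))
# → 6
</source>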
First node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</source>
Second node:
<source lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</source>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<source lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</source>
=Patching=
==Upgrade OPatch==
Do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</source>
==Apply PSU==
On first node as user grid:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</source>
On all nodes do as root:
<source lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</source>
==Configure local listener to another port==
As grid user:
<source lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</source>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
===Example for chdg===
<source lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</source>
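A self-contained run of the range-pattern idea above, with two made-up LUN paths piped in (awk is used instead of nawk so it runs anywhere):
<source lang=bash>
# Two hypothetical LUN device paths bracketing the /002d0/,/011d0/ range.
printf '%s\n' \
  /dev/rdsk/c0t60002AC000000000C903010650004002d0s2 \
  /dev/rdsk/c0t60002AC000000000C903010650004011d0s2 | \
awk -v type='DATA' '
/002d0/,/011d0/ {
  if (/C903/) { storage="HSA1" }
  if (/002d0/) { count=1; printf " <fg name=\"%s_%s\">\n", storage, type }
  gsub(/s2$/, "s0", $1)   # ASM gets slice 0, not the whole-disk slice 2
  printf "  <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n", storage, type, count++, $1
  if (/011d0/) { print " </fg>" }
}'
</source>
The output is one <fg> element with two numbered <dsk> entries, each rewritten to slice 0.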
===Example for mkdg===
<source lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</source>
data_config.xml:
<source lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C903010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C903010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C903010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C903010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C903010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C903010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C903010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C903010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C903010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C903010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C906010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C906010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C906010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C906010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C906010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C906010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C906010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C906010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C906010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C906010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</source>
asmh:
<source lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</source>
639c34e988845005b74c43231dd0443fe58fe62b
Linux Software RAID
0
286
1298
2016-06-16T10:05:28Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Linux]] =mdadm= ==Force rebuild of a failed RAID== Example for /dev/md10 ===The problem: Two failed disks in a RAID5=== Looks ugly but maybe we hav…“
wikitext
text/x-wiki
[[Kategorie:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but maybe we are lucky and the disks are just marked as bad.
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</source>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
And you have to do this:
<source lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</source>
===Check the status===
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</source>
This is good:
State : clean, degraded, recovering
Better wait with the next reboot for completion:
Rebuild Status : 5% complete
It should continue rebuilding after a reboot, but... you never know.
4511ab5693761f75a25a3ef44e565dc64db2c6c6
1299
1298
2016-06-16T10:09:21Z
Lollypop
2
/* The problem: Two failed disks in a RAID5 */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but maybe we are lucky and the disks are just marked as bad.
==== cat /proc/mdstat ====
<source lang=bash>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
...
md10 : inactive sdap1[11] sdao1[5] sdah1[15](S) sdag1[4] sdy1[3] sdz1[14] sdr1[8] sdb1[13] sdq1[16](S) sdi1[1] sda1[12]
5236577280 blocks super 1.2
...
</source>
The state is <i>inactive</i>; this is not what we want. Look at the details in the next step.
==== mdadm --detail ====
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</source>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
And you have to do this:
<source lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</source>
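To follow the rebuild afterwards you can extract the recovery percentage from /proc/mdstat; a sketch with a sample line (on the real system read /proc/mdstat itself):
<source lang=bash>
# Sample /proc/mdstat recovery line; awk picks the percentage field.
printf '      [=>...................]  recovery =  5.0%% (23802614/476052288) finish=120.5min\n' | \
awk '/recovery/ { print $4 }'
# → 5.0%
</source>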
===Check the status===
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</source>
This is good:
State : clean, degraded, recovering
Better wait with the next reboot for completion:
Rebuild Status : 5% complete
It should continue rebuilding after a reboot, but... you never know.
3b82009206255294f7c6cdc855f14477a53f9fd2
TShark
0
238
1301
923
2016-06-30T07:10:29Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool to sniff network traffic when you have no X. It analyzes the traffic just as Wireshark does. Great tool!
==MySQL traffic==
To look on an application server for MySQL traffic you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our own Ethernet address on interface ''IFACE''.
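The MAC selection can be tested offline with a sample <i>ip link</i> line (the address is made up):
<source lang=bash>
# Sample `ip link show eth0` output; awk prints the MAC after "link/ether".
printf '2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500\n    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff\n' | \
awk '$1 ~ /link\/ether/ { print $2 }'
# → 00:11:22:33:44:55
</source>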
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
4b5a9fa5b649a77bd307d72cc2d6d9b2c26228f2
PowerDNS
0
287
1302
2016-06-30T18:08:26Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: DNS]] =PowerDNS Server= ==Logging with systemd and syslog-ng== 1. Tell the journald of systemd to forward messages to syslog: In <i>/etc/system…“
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server=
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source>
#ForwardToSyslog=yes
</source>
to
<source>
ForwardToSyslog=yes
</source>
Then restart the journald
<source>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source>
source s_src {
system();
internal();
};
</source>
to
<source>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
f1e3e79e39abaadd6254e366b402051a054e7915
1303
1302
2016-06-30T18:10:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server=
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart the journald
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
9970e9c95e1ad037acd6d589bea1aa510f6553de
1307
1303
2016-07-01T07:33:35Z
Lollypop
2
/* PowerDNS Server */
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart the journald
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
f5cb41df3dca5ec8ce3903759cee75a795bc91de
1308
1307
2016-07-01T07:55:45Z
Lollypop
2
/* PowerDNS Server (pdns_server) */
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart the journald
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
At the moment Ubuntu misses the
da129ccdffba25d5591519c68e7b84be66c4fb7c
Category:DNS
14
288
1304
2016-06-30T18:10:44Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Systemd
0
233
1305
906
2016-07-01T07:22:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names usually are, this has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system with the ability to watch processes after they have been started, list sockets owned by processes started by systemd, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) of Solaris :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
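On a live system <i>systemctl --failed</i> lists exactly these units; the filter can also be sketched over sample list-units output (sample lines shown):
<source lang=bash>
# Print unit names whose ACTIVE column says "failed".
printf '%s\n' \
  'accounts-daemon.service loaded active running Accounts Service' \
  'anacron.service loaded failed failed Run anacron jobs' | \
awk '$3 == "failed" { print $1 }'
# → anacron.service
</source>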
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and privileges it currently has. This prohibits UID changes: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
==systemd-timesyncd.service an alternative to ntp==
The ntpd is a good, fat old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done via <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are used if none of the ''NTP'' servers can be reached.
If you want to split the configuration into multiple files or generate it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the config, you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'' we want ''zed.service'' to be started if it is enabled, and we strictly need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleared. This is done via a separate mount namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it no longer works:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
79448efaabe1d3140463622972c9ffae9e107c77
1306
1305
2016-07-01T07:23:37Z
Lollypop
2
/* systemd-timesyncd.service an alternative to ntp */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names usually are, it has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system: it keeps watching processes after they have started, lists sockets owned by processes started by systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As you can see below, there are both hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and privileges it currently has. This prohibits UID changes: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
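A minimal sketch of how this could be applied as a drop-in; the unit name ''mydaemon.service'' and its path are made up for illustration:
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/hardening.conf (hypothetical unit)
[Service]
NoNewPrivileges=true
# Combine with a capability whitelist as shown in the previous section:
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
</source>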
=systemd-timesyncd an alternative to ntp=
The ntpd is a good, fat old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done via <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are used if none of the ''NTP'' servers can be reached.
If you want to split the configuration into multiple files or generate it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the config, you can enable timesyncd via:
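As a sketch of the drop-in mechanics, with a scratch directory standing in for <i>/etc/systemd</i> so nothing on the system is touched:
<source lang=bash>
# Scratch directory instead of /etc/systemd, for illustration only
base=$(mktemp -d)
mkdir -p "$base/timesyncd.conf.d"
cat > "$base/timesyncd.conf.d/10-local.conf" <<'EOF'
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
EOF
cat "$base/timesyncd.conf.d/10-local.conf"
</source>
On a real system the fragment would go to <i>/etc/systemd/timesyncd.conf.d/10-local.conf</i>.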
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'' we want ''zed.service'' to be started if it is enabled, and we strictly need ''zfs-mount.service'' and ''zfs-share.service''.
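The same pattern can be used for your own targets; the following is an illustrative sketch, with ''mystack.target'' and the referenced services being made-up names:
<source lang=ini>
# /etc/systemd/system/mystack.target (hypothetical)
[Unit]
Description=My application stack
# Hard dependency: the target fails if this fails to start
Requires=mydb.service
# Soft dependency: started if enabled, but failure does not fail the target
Wants=mycache.service

[Install]
WantedBy=multi-user.target
</source>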
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleared. This is done via a separate mount namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
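A sketch with made-up unit names: ''writer.service'' owns a private /tmp and ''reader.service'' joins its namespace:
<source lang=ini>
# writer.service (hypothetical)
[Service]
PrivateTmp=true
ExecStart=/usr/local/bin/writer

# reader.service (hypothetical), shares writer's private /tmp
[Unit]
JoinsNamespaceOf=writer.service
[Service]
PrivateTmp=true
ExecStart=/usr/local/bin/reader
</source>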
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it no longer works:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
7d8b55396998eedf4f01414a8861e67ac5bfc201
ZFS cheatsheet
0
29
1309
837
2016-07-01T08:11:01Z
Lollypop
2
/* Limitieren des ARC Cache */
wikitext
text/x-wiki
[[Kategorie:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
== ZFS Tuning ==
A perceived slowness on systems with ZFS comes from its very large appetite for cache. It can be limited.
First, let's see what is going on:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
Or ZFS only:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Print all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
You can also print all parameters that are set for ZFS with:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Kernel parameters can also be set online with:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
This sets zfs_arc_max to 4 GB = 0x100000000.
== Limiting the ARC cache ==
Simply put into /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<source lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</source>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
But !!!! NEVER DO THIS !!!!
Never use ''mdb -kw'' to set these values on a production system!!!
But on a '''test system''' you could try to get the addresses in the kernel with
<source lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</source>
Calculate for example 8GB:
<source lang=bash>
# printf "0x%x\n" $[ 8 * 1024 *1024 *1024 ]
0x200000000
</source>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<source lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</source>
== Displaying ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If ''zfs list -o space'' is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migrating UFS root -> ZFS root via Live Upgrade ==
First create the ZFS root pool:
# zpool create rpool /dev/dsk/<zfs-disk>
If you want to stay out of trouble, keep the name rpool.
Create the boot environment (BE) with lucreate:
# lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.
Check whether the bootfs has been set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries that may still be left over in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
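The perl one-liner above only comments out rootdev lines; a sketch on a scratch file (the sample file content is made up):
<source lang=bash>
# Demonstrate the rootdev comment-out on a scratch copy, not on /etc/system
demo=$(mktemp)
cat > "$demo" <<'EOF'
set md:mirrored_root_flag=1
rootdev:/pseudo/md@0:0,0,blk
EOF
perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' "$demo"
cat "$demo"
</source>
Only the rootdev line gains a leading "* "; the backup with the original content is kept next to it with the suffix <i>.orig</i>.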
Install the boot block for ZFS on the ZFS disk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Activate the new BE:
# luactivate zfsBE
==cannot destroy 'snapshot': dataset is busy==
<source lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</source>
db967d21c78c41e91d2a40097720466df8844001
1310
1309
2016-07-01T08:11:51Z
Lollypop
2
/* Limitieren des ARC Cache */
wikitext
text/x-wiki
[[Kategorie:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ bei Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
== ZFS Tuning ==
A perceived slowness on systems with ZFS comes from its very large appetite for cache. It can be limited.
First, let's see what is going on:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
Or ZFS only:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
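The Pages column in ::memstat converts to the MB column assuming 4 KiB pages; a quick shell sanity check against the ZFS File Data row above:
<source lang=bash>
# 359196 pages * 4096 bytes per page, expressed in MB
echo $(( 359196 * 4096 / 1024 / 1024 ))   # -> 1403, matching the memstat output
</source>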
Print all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
You can also print all parameters that are set for ZFS with:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
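The hex values are plain byte counts; shell arithmetic understands the 0x prefix directly, e.g. for the zfs_arc_max value above:
<source lang=bash>
# 0x100000000 bytes expressed in MB and in GB
echo $(( 0x100000000 / 1024 / 1024 ))        # MB
echo $(( 0x100000000 / 1024 / 1024 / 1024 )) # GB
</source>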
Kernel parameters can also be set online with:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
Das setzt den zfs_arc_max auf 4GB = 0x100000000
== Limitieren des ARC Cache ==
In der /etc/system einfach:
set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<source lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</source>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
But !!!! NEVER DO THIS !!!!
Never use ''mdb -kw'' to set the values!!!
But on a '''test system''' you could try to get the position in the Kernel with
<source lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</source>
Calculate for example 8GB:
<source lang=bash>
# printf "0x%x\n" $[ 8 * 1024 ** 3 ]
0x200000000
</source>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<source lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</source>
== ZFS Platzverbrauch besser anzeigen ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
Wenn zfs list -o space als shortcut noch nicht zur Verfügung steht, geht meist:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
Erstmal den ZFS Rootpool anlegen:
# zpool create rpool /dev/dsk/<zfs-disk>
Wer Problemen aus dem Weg gehen möchte, läßt den Namen bei rpool.
Boot-Environment (BE) mit lucreate erstellen
# lucreate -c ufsBE -n zfsBE -p rpool
Hiermit werden die Files in die ZFS-Umgebung kopiert.
Prüfen, ob das BootFS richtig gesetzt wurde:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Auskommentieren von eventuell noch nachgebliebenen rootdev-Einträgen in der /etc/system
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Bootblock für ZFS auf die ZFS-Platte
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Aktivieren des neuen BEs
# luactivate zfsBE
==cannot destroy 'snapshot': dataset is busy==
<source lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</source>
cc54f41374ba21a2fb47a4e6c9afa413aa3c11a0
1323
1310
2016-07-25T08:02:32Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<source lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</source>
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS often comes from its large appetite for cache, which can be limited.
First, take a look at the current state:
<source lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</source>
Or ZFS only:
<source lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</source>
Printing all ARC parameters:
<source lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</source>
You can also print all parameters that are set for ZFS:
<source lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</source>
Kernel parameters can also be set online:
<source lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</source>
This sets zfs_arc_max to 4 GB = 0x100000000.
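The hex value is easy to double-check with plain shell arithmetic (nothing ZFS-specific):

```shell
# verify that 4 GiB in bytes is exactly 0x100000000
printf '0x%x\n' $((4 * 1024 ** 3))
# prints: 0x100000000
```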
== Limiting the ARC cache ==
Simply add to /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<source lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</source>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
But: NEVER DO THIS in production!
Never use ''mdb -kw'' to set the values!
On a '''test system''', however, you can try to find the position in the kernel with
<source lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</source>
Calculate, for example, 8 GB:
<source lang=bash>
# printf "0x%x\n" $[ 8 * 1024 ** 3 ]
0x200000000
</source>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<source lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</source>
== Showing ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If ''zfs list -o space'' is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS-Root -> ZFS-Root via Live-Upgrade ==
First create the ZFS root pool:
# zpool create rpool /dev/dsk/<zfs-disk>
If you want to avoid problems, keep the name rpool.
Create the boot environment (BE) with lucreate:
# lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.
Check that bootfs was set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any leftover rootdev entries in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the boot block for ZFS on the ZFS disk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Activate the new BE:
# luactivate zfsBE
==cannot destroy 'snapshot': dataset is busy==
<source lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</source>
==Fragmentation==
<source lang=bash>
# zdb -mm <pool> | nawk '/fragmentation/{count++;frag+=$NF}END{printf "Overall fragmentation %.2f\n",(frag/count);}'
</source>
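On newer ZFS releases (pools with the spacemap_histogram feature enabled), fragmentation is also exposed directly as a pool property, so the zdb detour above may not be needed:

```
# zpool list -o name,size,capacity,fragmentation
```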
63f042122fd517ee377ec05882d184d082ebb9c4
IPS cheat sheet
0
98
1312
1311
2016-07-04T12:49:59Z
Lollypop
2
/* Solaris 11 release */
wikitext
text/x-wiki
=Cheat sheet=
[[File:Ips-one-liners.pdf|page=1|600px]]
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: hostname -> Accept
-> Download Key
-> Download Certificate
2. Copy it to your Solaris 11 host into /var/pkg/ssl.
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-G '*' -g https://pkg.oracle.com/solaris/support/ solaris
5. Refresh the catalog:
# pkg refresh --full
6. Check for updates:
# pkg update -nv
7. If needed or wanted, do the update:
# pkg update -v
== Adding another repository ==
So far, the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But... which package was it in?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let's take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let's fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
=Solaris 11 release=
<source lang=bash>
$ LANG=C pkg info kernel | nawk '$1 == "Version:"{split($2,version,/\./)}$1 == "Branch:"{split($2,branch,/\./)}END{printf ("Solaris %d.%d Update %d SRU %d SRU-Build %d\n",version[2],version[3],branch[3],branch[4],branch[6])}'
Solaris 5.11 Update 2 SRU 0 SRU-Build 42
</source>
[[Kategorie:Solaris11]]
a07c89eebf0b246ca74050e333473c7c0e99d163
SunCluster Delete Ressource Group
0
206
1313
730
2016-07-05T15:50:17Z
Lollypop
2
/* Ressourcegruppe löschen */
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that will be used in the one-liners below.
Don't do this! Once again, I take no responsibility! Everything is wrong! Don't do it!
==Setting the resource group in question==
<source lang=bash>
# RG=my-rg
</source>
==Listing the resources==
<source lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</source>
==Taking the resource group and resources offline==
<source lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</source>
==Showing the zpools==
<source lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</source>
==Showing only the zpool names==
<source lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</source>
==Showing the DID devices==
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</source>
or only the DIDs:
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</source>
==Disabling device monitoring==
This is important in order to get the devices completely out of the cluster later!
<source lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</source>
==Deleting the resource group==
<source lang=bash>
# RG=bla-rg
# clrs disable -g ${RG} +
# clrs delete -g ${RG} +
# clrg delete ${RG}
</source>
==Now unmap the LUNs on the storage==
And delete them if appropriate...
==Removing no longer existing LUNs from Solaris==
<source lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</source>
==Cleaning up DIDs==
<source lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</source>
==Cleaning up zone configs if needed==
<source lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</source>
b12f1c5351773455d1d590c3bb6e75a44f2c4ec6
Nice Options
0
253
1315
1314
2016-07-06T07:09:25Z
Lollypop
2
wikitext
text/x-wiki
Linux:
<source lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
pwgen -nancy 17
</source>
Solaris:
<source lang=bash>
prstat -Lmaa
iostat -Erni
</source>
6af424dd82cecaf2dc91bf677559e56860ab6866
SSH Tipps und Tricks
0
75
1316
983
2016-07-07T11:08:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the target=
==SSH via one or more hops==
To open an SSH connection from Host_A to Host_B, you have to tunnel through two machines (GW_1 and GW_2). If you always log in first and then log in again from there, it is often quite hard to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the way from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
But we can only reach GW_2 via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now simply done like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established, and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a Bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
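On newer OpenSSH versions (7.3 and later) the same multi-hop setup can be written more compactly with ProxyJump; a sketch, assuming the same host names as above:

```
Host Host_B
    ProxyJump GW_1,GW_2
```

The ad-hoc form <i>ssh -J GW_1,GW_2 Host_B</i> works without any config entry.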
==Breaking out of paradise==
Problem: the environment you are working in is unfortunately so boxed in with firewalls that you cannot get anything done. But you need SSH access to the outside to quickly check or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through on well-known ports.
Then add to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And presto, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course you can in turn use this host in another ProxyCommand, and so on.
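If connect is not installed, OpenBSD netcat can usually play the same role; a sketch, assuming the same proxy and server names as above:

```
Host ssh-via-proxy
    ProxyCommand nc -X connect -x proxy-server:3128 ssh-server 443
```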
==Oh right... the internal wiki...==
No problem either if it is only reachable from the internal network; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to work with shorter strings of digits. The fingerprint is therefore handy for comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
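Note that newer OpenSSH (6.8 and later) prints SHA256 fingerprints by default; the colon-separated MD5 form shown above can still be requested explicitly:

```
$ ssh-keygen -E md5 -lf ~/.ssh/id_dsa.pub
```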
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
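After changing sshd_config it is worth validating the syntax before restarting the daemon; sshd's test mode does that:

```
# check the configuration for syntax errors (prints nothing on success)
# sshd -t
```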
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
55ab610e78f6ee2e946b9fd601c337f7bc4abd9e
Linbit
0
289
1320
1319
2016-07-13T13:20:21Z
Lollypop
2
wikitext
text/x-wiki
Switching off the ping-pong:
<source lang=bash>
# crm configure
crm(live)configure# property maintenance-mode=on
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
</source>
<source lang=bash>
# crm configure
crm(live)configure# property maintenance-mode=on
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
# vi config-files...
# crm_resource -l | xargs -l crm resource cleanup
# crm configure
crm(live)configure# property maintenance-mode=off
crm(live)configure# ptest actions
INFO: install graphviz to see a transition graph
notice: LogActions: Start stonith_fence_ipmilan_hhlokva04 (hhlokva03.srv.ndr-net.de)
notice: LogActions: Start stonith_fence_ipmilan_hhlokva03 (hhlokva04.srv.ndr-net.de)
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
</source>
<source lang=bash>
# virsh list --all
Id Name State
----------------------------------------------------
1 OSC_v1 running
- ts_v1 shut off
</source>
8529ce5328e3aa2075412ff6ae441fd369eca6a0
DNS cheatsheet
0
290
1321
2016-07-14T20:35:10Z
Lollypop
2
Die Seite wurde neu angelegt: „==dns2hosts== <source lang=perl> #!/usr/bin/perl use Net::DNS; use Net::DNS qw(rrsort); my @nameservers = ("auth-dns-1.domain.de","auth-dns-2.domain.de"); my…“
wikitext
text/x-wiki
==dns2hosts==
<source lang=perl>
#!/usr/bin/perl
use Net::DNS;
use Net::DNS qw(rrsort);
my @nameservers = ("auth-dns-1.domain.de","auth-dns-2.domain.de");
my $net_regex = '10\.11\.';
my $domain = 'domain.de';
# cut_off_domain=0 : host.domain (FQDN)
# cut_off_domain=1 : short name only
# cut_off_domain=2 : short name plus FQDN
my $cut_off_domain=1;
my $res = Net::DNS::Resolver->new;
$res->nameservers(@nameservers);
Net::DNS::RR::A->set_rrsort_func ('asorted',
sub {($a,$b)=($Net::DNS::a,$Net::DNS::b);
$a->{'address'} cmp $b->{'address'}});
# Get the zone
my @zone = $res->axfr($domain);
# All A records
my @addresses = grep { $_->type eq "A" } @zone;
# Filter out net if $net_regex is set
@addresses = grep { $_->address =~ /$net_regex/ } @addresses if(defined($net_regex));
# All CNAME records
my @cnames = grep { $_->type eq "CNAME" } @zone;
my $host;
foreach my $rr (rrsort("A","asorted", @addresses)) {
    $host=$rr->name;
    $host=(split /\./,$host)[0] if ($cut_off_domain == 1);
    $host=(split /\./,$rr->name)[0]." ".$rr->name if ($cut_off_domain == 2);
    print $rr->address."\t".$host;
    foreach my $cname (grep { $_->cname eq $rr->name } @cnames) {
        $host=$cname->name;
        $host=(split /\./,$host)[0] if ($cut_off_domain == 1);
        $host=(split /\./,$cname->name)[0]." ".$cname->name if ($cut_off_domain == 2);
        print " ".$host;
    }
    print "\n";
}
</source>
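A possible way to run the script (the file name dns2hosts.pl is hypothetical; note that the listed nameservers must allow zone transfers (AXFR) from this host, otherwise @zone stays empty):
<source lang=bash>
# perl dns2hosts.pl > hosts.generated
</source>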
8fcc432de33fb007460ddde478bb576c3624a12e
1322
1321
2016-07-15T05:57:01Z
Lollypop
2
wikitext
text/x-wiki
=dig=
==Compare several nameservers to see if the SOA matches==
<source lang=bash>
$ domain=denic.de
$ printf "Domain: %s\n" ${domain} ; for ns in $(dig +short ${domain} ns) ; do printf "Nameserver: %s => SOA: %s\n" ${ns} "$(dig +short ${domain} soa @${ns})" ; done
Domain: denic.de
Nameserver: ns2.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns1.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns3.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
</source>
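When only the zone serial matters, it can be extracted from the SOA answer; a sketch (in the `dig +short` SOA output the third field is the serial):
<source lang=bash>
$ for ns in $(dig +short ${domain} ns) ; do printf "%s serial: %s\n" ${ns} "$(dig +short ${domain} soa @${ns} | awk '{print $3}')" ; done
</source>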
==dns2hosts==
<source lang=perl>
#!/usr/bin/perl
use strict;
use warnings;
use Net::DNS qw(rrsort);
my @nameservers = ("auth-dns-1.domain.de","auth-dns-2.domain.de");
my $net_regex = '10\.11\.';
my $domain = 'domain.de';
# cut_off_domain=0 : host.domain (FQDN)
# cut_off_domain=1 : short name only
# cut_off_domain=2 : short name plus FQDN
my $cut_off_domain=1;
my $res = Net::DNS::Resolver->new;
$res->nameservers(@nameservers);
Net::DNS::RR::A->set_rrsort_func ('asorted',
sub {($a,$b)=($Net::DNS::a,$Net::DNS::b);
$a->{'address'} cmp $b->{'address'}});
# Get the zone
my @zone = $res->axfr($domain);
# All A records
my @addresses = grep { $_->type eq "A" } @zone;
# Filter out net if $net_regex is set
@addresses = grep { $_->address =~ /$net_regex/ } @addresses if(defined($net_regex));
# All CNAME records
my @cnames = grep { $_->type eq "CNAME" } @zone;
my $host;
foreach my $rr (rrsort("A","asorted", @addresses)) {
    $host=$rr->name;
    $host=(split /\./,$host)[0] if ($cut_off_domain == 1);
    $host=(split /\./,$rr->name)[0]." ".$rr->name if ($cut_off_domain == 2);
    print $rr->address."\t".$host;
    foreach my $cname (grep { $_->cname eq $rr->name } @cnames) {
        $host=$cname->name;
        $host=(split /\./,$host)[0] if ($cut_off_domain == 1);
        $host=(split /\./,$cname->name)[0]." ".$cname->name if ($cut_off_domain == 2);
        print " ".$host;
    }
    print "\n";
}
</source>
0684e2eea8de7e02fff6227f48523b2b43155df3
Autofs
0
256
1324
988
2016-07-26T12:07:23Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|autofs]]
[[Kategorie:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<source lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</source>
===/etc/auto.master.d/home.autofs===
<source lang=bash>
/home /etc/auto.master.d/home.map
</source>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<source lang=bash>
* :/data/home/& nfs.server.de:/home/&
</source>
The asterisk means that any directory under /home/ is matched by this rule.
The ampersand is replaced by the part that was matched by the asterisk.
So if you enter /home/a, the automounter first looks locally for /data/home/a, which is bind-mounted when found.
<source lang=bash>
# cd /home/a
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</source>
For another /home/b which is on the nfs server it looks like this:
<source lang=bash>
# cd /home/b
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</source>
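The same wildcard mechanism also supports replicated NFS servers: several locations can be listed for one key, and the automounter picks the first one that responds. A sketch (the server names are placeholders):
<source lang=bash>
* nfs1.server.de,nfs2.server.de:/home/&
</source>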
===cifs===
<i>/etc/auto.master.d/mycifsshare.autofs</i>:
<source lang=bash>
/data/cifs /etc/auto.master.d/mycifsshare.map
</source>
<i>/etc/auto.master.d/mycifsshare.map</i>:
<source lang=bash>
mycifsshare -fstype=cifs,rw,credentials=/etc/samba/mycifsshare_credentials,uid=<myuser>,forceuid ://192.168.1.2/mycifsshare
</source>
af72d2edacaa543cf7fa1fd83bad4fc9dcec0191
Brocade
0
107
1325
623
2016-08-01T11:27:40Z
Lollypop
2
/* Key from the switch -> host ~/.ssh/authorized_keys */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with brief explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType indicates which switch model we are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is enabled and which configuration is active (here: Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
* Subordinate
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''CAUTION: Not non-disruptive!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today practically only WWN zoning is used, because it is the most flexible and most secure variant: cables can be re-plugged freely within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
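A sketch of how WWN zoning is typically set up on Fabric OS (the zone name is made up, the WWNs are taken from the switchshow output above; cfgsave/cfgenable affect the whole fabric configuration, so handle with care):
<source lang=bash>
brocade:admin> zonecreate "z_host1_stor1", "21:00:00:24:ff:36:45:02; 50:0a:09:81:96:c8:3e:f8"
brocade:admin> cfgadd "Fabric1", "z_host1_stor1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
</source>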
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate the key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
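To verify that the key exchange worked without falling back to a password prompt, a quick non-interactive test (sketch, hostname as above):
<source lang=bash>
Host# ssh -o BatchMode=yes root@bsan01 date
</source>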
=Config backup=
Important: exchange the keys first!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
done
</source>
b6b9ddc0de57d8d64ab64cfa48d7499a14e4b68f
1326
1325
2016-08-01T11:57:02Z
Lollypop
2
/* Config backup */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with brief explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType indicates which switch model we are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is enabled and which configuration is active (here: Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
* Subordinate
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''CAUTION: Not non-disruptive!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today practically only WWN zoning is used, because it is the most flexible and most secure variant: cables can be re-plugged freely within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate the key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Config backup=
Important: exchange the keys first!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/${BACKUPDIR} ] && mkdir -p ~/${BACKUPDIR}
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
last_backup="$(ls -1t ~/${BACKUPDIR}/${switch}_config_*.txt.gz 2>/dev/null | head -1)"  # newest previous backup
gzip -cd ${last_backup} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ~/${BACKUPDIR}/${switch}_config_${date}.txt | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ~/${BACKUPDIR}/${switch}_config_${date}.txt
else
# Differences encountered, keep the new backup
gzip -9 ~/${BACKUPDIR}/${switch}_config_${date}.txt
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
569071f14c6f9a19dce7528c228dc06460808943
1327
1326
2016-08-01T12:05:58Z
Lollypop
2
/* Config backup */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with brief explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType indicates which switch model we are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is enabled and which configuration is active (here: Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
* Subordinate
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''CAUTION: Not non-disruptive!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today practically only WWN zoning is used, because it is the most flexible and most secure variant: cables can be re-plugged freely within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate the key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from the switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Config backup=
Important: exchange the keys first!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/${BACKUPDIR} ] && mkdir -p ~/${BACKUPDIR}
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
backup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1t ~/${BACKUPDIR}/${switch}_config_*.txt.gz 2>/dev/null | head -1)"  # newest previous backup
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${backup_file} | diff -ub - ${tmp_file} )
then
    # The last backup is identical
    rm -f ${backup_file}
else
    # Differences encountered, keep the new backup
    gzip -9 ${backup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
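The dedup logic in the loop above (compare the new dump against the most recent compressed one, ignoring the timestamp line) can be exercised locally; a minimal sketch with made-up file names:
<source lang=bash>
# Two config dumps that differ only in the "date =" line
workdir=$(mktemp -d)
printf 'date = 1\nzone A\n' > ${workdir}/cfg_old.txt
gzip -9 ${workdir}/cfg_old.txt                        # previous backup, compressed
printf 'date = 2\nzone A\n' > ${workdir}/cfg_new.txt  # new backup, only the date differs
gzip -cd ${workdir}/cfg_old.txt.gz | grep -v "date =" > ${workdir}/old.stripped
if grep -v "date =" ${workdir}/cfg_new.txt | diff -u - ${workdir}/old.stripped > /dev/null
then
    rm -f ${workdir}/cfg_new.txt                      # identical except for the timestamp: drop it
fi
</source>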
24e9c0dfca4ec7bc74567dfbe4910db4879b7dab
1328
1327
2016-08-01T14:17:10Z
Lollypop
2
/* Config backup */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with brief explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType indicates which switch model we are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is enabled and which configuration is active (here: Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
* Subordinate
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''CAUTION: Not non-disruptive!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are connected and through which ports they are linked to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Today practically only WWN zoning is used, because it is the most flexible and most secure variant: cables can be re-plugged freely within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is always the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH mit public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Key auf Switch generieren===
Als '''admin''' !
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key vom Switch -> Host ~/.ssh/authorized_keys===
Als '''root''' !
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Backup der Config=
Wichtig, vorher die Keys austauschen!
# Der Brocade Pubkey muß nach ~bckpuser/.ssh/authorized_keys
# Der Pubkey des aufrufenden Users muß auf den Brocade ~root/.ssh/authorized_keys
Ein mögliches Script könnte so aussehen:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered keep new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
29841e4916e3006e9c8c5069461ae1149da752ae
1329
1328
2016-08-04T09:37:33Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=Ein paar Kommandos mit kurzer Erklärung dazu=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Wichtige Zeilen:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType gibt Auskunft, welchen Switch wir vor uns haben. Hier einen Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Tabelle von IBM]
* PDF von Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Zeigt an, ob das [[#Zoning|Zoning]] aktiv ist und welche Konfiguration aktiv ist (hier Fabric1) siehe auch [[#Fabric|Fabric]].
===switchshow:switchRole===
Es gibt zwei Rollen
* Principal (also den Chef)
und
* Subordinate (also den Untergeordneten)
z.B.:
<source lang=bash>
switchRole: Principal
</source>
Die Rolle kann man ändern
'''ACHTUNG: Nicht Unterbrechungsfrei!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
Eine Fabric besteht aus einem oder mehreren Fibre-Channel-Switchen, die miteinander verbunden sind. Komponenten wie Hosts, Storage und Tapes werden über die Fibre-Channel-Switche mit der Fabric verbunden.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
Mit islshow bekommt man heraus, welche weiteren Switches angeschlossen sind und über welche Ports sie mit dem aktuellen verbunden sind.
<source lang=bash>
brocade:admin> islshow
rz1_fab1_01:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
Eine Zone legt fest, welche Ports oder WWNs sich sehen dürfen.
Heute mach man eigentlich nur noch WWN-Zoning, weil es das flexibelste und sicherste ist. Man kann dadurch einfach die Kabel innerhalb der [[#Fabric|Fabric]] hin und herstecken, ohne daß ein Gerät mit mal ein anderes sehen kann, als vorher.
Bei Portzoning ist die Gefahr des falsch steckens gegeben.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH mit public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Key auf Switch generieren===
Als '''admin''' !
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key vom Switch -> Host ~/.ssh/authorized_keys===
Als '''root''' !
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Backup der Config=
Wichtig, vorher die Keys austauschen!
# Der Brocade Pubkey muß nach ~bckpuser/.ssh/authorized_keys
# Der Pubkey des aufrufenden Users muß auf den Brocade ~root/.ssh/authorized_keys
Ein mögliches Script könnte so aussehen:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered keep new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
=Firmware update=
==Record the running firmware==
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot here]]
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</source>
If there is allready an brocade user with an authorized_keys file do:
<source lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</source>
else put them into /home/sftp/.authorized_keys/brocade if you want.
3ddffcfcdba6cb79bd7edf3185cb2d72bc3f1268
1330
1329
2016-08-04T09:50:42Z
Lollypop
2
/* Firmware update */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=Ein paar Kommandos mit kurzer Erklärung dazu=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Wichtige Zeilen:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType gibt Auskunft, welchen Switch wir vor uns haben. Hier einen Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Tabelle von IBM]
* PDF von Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Zeigt an, ob das [[#Zoning|Zoning]] aktiv ist und welche Konfiguration aktiv ist (hier Fabric1) siehe auch [[#Fabric|Fabric]].
===switchshow:switchRole===
Es gibt zwei Rollen
* Principal (also den Chef)
und
* Subordinate (also den Untergeordneten)
z.B.:
<source lang=bash>
switchRole: Principal
</source>
Die Rolle kann man ändern
'''ACHTUNG: Nicht Unterbrechungsfrei!'''<br>
'''WARNING: DISRUPTIVE ACTION !'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
Eine Fabric besteht aus einem oder mehreren Fibre-Channel-Switchen, die miteinander verbunden sind. Komponenten wie Hosts, Storage und Tapes werden über die Fibre-Channel-Switche mit der Fabric verbunden.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
Mit islshow bekommt man heraus, welche weiteren Switches angeschlossen sind und über welche Ports sie mit dem aktuellen verbunden sind.
<source lang=bash>
brocade:admin> islshow
rz1_fab1_01:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
Eine Zone legt fest, welche Ports oder WWNs sich sehen dürfen.
Heute mach man eigentlich nur noch WWN-Zoning, weil es das flexibelste und sicherste ist. Man kann dadurch einfach die Kabel innerhalb der [[#Fabric|Fabric]] hin und herstecken, ohne daß ein Gerät mit mal ein anderes sehen kann, als vorher.
Bei Portzoning ist die Gefahr des falsch steckens gegeben.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=SSH mit public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Key auf Switch generieren===
Als '''admin''' !
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key vom Switch -> Host ~/.ssh/authorized_keys===
Als '''root''' !
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Backup der Config=
Wichtig, vorher die Keys austauschen!
# Der Brocade Pubkey muß nach ~bckpuser/.ssh/authorized_keys
# Der Pubkey des aufrufenden Users muß auf den Brocade ~root/.ssh/authorized_keys
Ein mögliches Script könnte so aussehen:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered keep new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
=Firmware update=
==Record the running firmware==
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] for setting up a chroot sftp environment.
Then create the home:
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</source>
If there is allready an brocade user with an authorized_keys file do:
<source lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</source>
else put them into /home/sftp/.authorized_keys/brocade if you want.
Untar your firmware as brocade in /home/sftp/brocade/fw.
Login to the switch as admin and do for example:
<source lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</source>
1bd244b7e075eba612a5907a4456e67633bbf567
1331
1330
2016-08-04T09:51:10Z
Lollypop
2
/* Example for a brocade sftp firmware download directory */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with short explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells you which switch model you are looking at; in this case a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|Zoning]] is active and which configuration is enabled (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
* Subordinate (the underling)
For example:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: DISRUPTIVE ACTION!'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches connected to each other. Components such as hosts, storage, and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
With islshow you can find out which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Nowadays you really only do WWN zoning, because it is the most flexible and safest variant. It lets you re-plug cables within the [[#Fabric|Fabric]] without a device suddenly seeing different devices than before.
With port zoning there is a real risk of plugging a cable into the wrong port.
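A minimal sketch of how a WWN zone is created on the switch (the alias and zone names are made up for illustration; the WWNs are taken from the switchshow output above, and Fabric1 is the active configuration):
<source lang=bash>
brocade:admin> alicreate "host1_hba0", "21:00:00:24:ff:36:45:02"
brocade:admin> alicreate "stor1_p0", "50:0a:09:81:96:c8:3e:f8"
brocade:admin> zonecreate "host1_stor1", "host1_hba0; stor1_p0"
brocade:admin> cfgadd "Fabric1", "host1_stor1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
</source>
cfgsave persists the change; cfgenable activates the configuration fabric-wide.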
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
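The numeric part of switchType can also be mapped in a script. A minimal sketch (the helper name switchtype_name is made up; only a few entries from the table above are included):

```shell
# Map the numeric part of a switchshow "switchType" value to a product
# name, using a few entries from the table above.
switchtype_name() {
    case "${1%%.*}" in        # "71.2" -> "71"
        34)  echo "Brocade 200E Switch" ;;
        64)  echo "Brocade 5300 Switch" ;;
        66)  echo "Brocade 5100 Switch" ;;
        71)  echo "Brocade 300 Switch" ;;
        109) echo "Brocade 6510 Switch" ;;
        *)   echo "unknown switchType $1" ;;
    esac
}

switchtype_name "71.2"    # prints: Brocade 300 Switch
```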
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
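A side note not from the original page: OpenSSH 7.0 and later disable ssh-dss keys by default, so a modern client may refuse to use the DSA key shown above. If you run into that, you can re-enable the algorithm for this one host in ~/.ssh/config (or, better, switch to an RSA key):
<pre>
Host bsan01
    HostKeyAlgorithms +ssh-dss
    PubkeyAcceptedKeyTypes +ssh-dss
</pre>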
==Brocade -> Host==
===Generate a key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Copy the key from the switch to the host's ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
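Note that the >> redirection in the command above is performed by the local shell, not on the switch, which is exactly what we want: the switch's public key ends up in the local authorized_keys. The parsing can be demonstrated locally (with sh -c standing in for the remote ssh command; the file name is made up):

```shell
# `ssh host cmd >> file` appends cmd's output to a LOCAL file:
# the redirection is parsed by the local shell before ssh even runs.
# Here `sh -c '...'` stands in for `ssh root@bsan01 cat .ssh/id_rsa.pub`.
rm -f /tmp/authorized_keys.demo
sh -c 'echo ssh-rsa-KEY-ONE' >> /tmp/authorized_keys.demo
sh -c 'echo ssh-rsa-KEY-TWO' >> /tmp/authorized_keys.demo
cat /tmp/authorized_keys.demo
```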
=Backing up the config=
Important: exchange the keys first!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
# Run this as ${LOCALUSER} on ${BACKUP_HOST}; the switches upload their
# configs here via scp (configupload), then identical backups are dropped.
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/"${BACKUPDIR}" ] && mkdir -p ~/"${BACKUPDIR}"
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
    printf 'Backing up %s to ~%s/%s/%s_config_%s.txt... ' \
        "${switch}" "${LOCALUSER}" "${BACKUPDIR}" "${switch}" "${date}"
    ssh -i ~/.ssh/id_rsa_nopw root@"${switch}" /fabos/link_sbin/configupload \
        -all -p scp "${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt"
    tmp_file="/tmp/.$$_${switch}.txt"
    backup_file=~/"${BACKUPDIR}/${switch}_config_${date}.txt"
    last_backup_file="$(ls -1rt ~/"${BACKUPDIR}"/"${switch}"_config_*.txt.gz 2>/dev/null | tail -1)"
    if [ -z "${last_backup_file}" ] ; then
        # No previous backup for this switch: keep the first one
        gzip -9 "${backup_file}"
        continue
    fi
    # Compare against the previous backup, ignoring the timestamp line
    gzip -cd "${last_backup_file}" | grep -v "date =" > "${tmp_file}"
    if grep -v "date =" "${backup_file}" | diff -ub - "${tmp_file}"
    then
        # The last backup is identical, discard the new one
        rm -f "${backup_file}"
    else
        # Differences found, keep the new backup
        gzip -9 "${backup_file}"
    fi
    [ -f "${tmp_file}" ] && rm -f "${tmp_file}"
done
</source>
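The deduplication idea in the script, comparing configs while ignoring the timestamp line, can be tried in isolation (the file names here are made up):

```shell
# Two configs that differ only in their "date =" line are considered
# identical by the script's comparison.
printf 'date = 20160804\nzoning = ON\n' > /tmp/cfg_old.txt
printf 'date = 20160805\nzoning = ON\n' > /tmp/cfg_new.txt

# Strip the timestamp line from the old config, then diff against the
# stripped new config; exit status 0 means "no real change".
grep -v "date =" /tmp/cfg_old.txt > /tmp/cfg_old.stripped
if grep -v "date =" /tmp/cfg_new.txt | diff -ub - /tmp/cfg_old.stripped
then
    echo "identical, drop the new backup"
else
    echo "changed, keep the new backup"
fi
# prints: identical, drop the new backup
```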
=Firmware update=
==Record the running firmware==
Before updating, record the currently running firmware with firmwareshow (see [[#Firmware|Firmware]] above), so you know what to fall back to.
==Example of a Brocade SFTP firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] to set up a chrooted SFTP environment.
Then create the home directory on the SFTP server:
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</source>
If there is already a brocade user with an authorized_keys file, do:
<source lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</source>
Otherwise put the keys into /home/sftp/.authorized_keys/brocade yourself.
Untar the firmware as user brocade in /home/sftp/brocade/fw.
Log in to the switch as admin and run, for example:
<source lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</source>
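Afterwards you can follow the update and verify the result (both are standard Fabric OS commands; output omitted here):
<source lang=bash>
san-sw:admin> firmwaredownloadstatus
san-sw:admin> firmwareshow
</source>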
SSH Tipps und Tricks
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the destination=
==SSH across one or more hops==
To get an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite difficult to carry the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the path from Host_A to Host_B.
Host_B can only be reached from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
We can only reach GW_2 via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and you are tunneled through the two gateways GW_1 and GW_2.
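As a side note: newer OpenSSH versions (7.3 and up) provide the ''ProxyJump'' directive (or ''ssh -J'' on the command line), which expresses the same chain without the /dev/tcp trick; a minimal sketch with the same host names:

<pre>
Host GW_2
    ProxyJump GW_1

Host Host_B
    ProxyJump GW_2
</pre>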
Port forwardings, e.g. for NFS, now simply look like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then set up in the background and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash feature (the "file" is handled by bash itself); %h and %p are filled in by ssh with the host (%h) and port (%p).
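The /dev/tcp redirection can also be tried out locally; a small sketch, assuming something (here: a web server) is listening on localhost port 8000:

<source lang=bash>
# Open file descriptor 3 as a TCP connection (handled by bash itself,
# no /dev/tcp device file exists), send a request, read the first reply line.
exec 3<>/dev/tcp/localhost/8000
printf 'GET / HTTP/1.0\r\n\r\n' >&3
head -1 <&3
exec 3>&- 3<&-
</source>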
==Breaking out of paradise==
Problem: the environment you are sitting in is so unhappily walled in with firewalls that you cannot get any work done. But you need to get out via SSH to quickly look at, or fetch, something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And presto, with <i>ssh ssh-via-proxy</i> you are on the SSH destination you wanted to reach. Of course you can use ssh-via-proxy as a hop in another ProxyCommand again, and so on.
==Oh right... the internal wiki...==
Not a problem either if it is only reachable from the internal network; then we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for dynamic (SOCKS5) forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not found configuration equivalents for -N and -f yet).
=The fingerprint=
For verification it is often easier to deal with shorter strings of digits. That is why the fingerprint is handy for comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
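Newer OpenSSH versions print SHA256 fingerprints by default; the old colon-separated MD5 format shown above is still available via -E. A quick sketch with a throwaway key (assumes ssh-keygen is installed; the file name is just an example):

<source lang=bash>
# Generate a throwaway key, show both fingerprint formats, clean up.
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
ssh-keygen -lf /tmp/demo_key.pub          # SHA256 format (current default)
ssh-keygen -lf /tmp/demo_key.pub -E md5   # old colon-separated MD5 format
rm -f /tmp/demo_key /tmp/demo_key.pub
</source>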
=Restricting users=
<source lang=config>
# SSH is only allowed for users in group ssh, except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For more on PortableApps, see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
=Problems with older remote peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
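Instead of typing these options every time, they can be pinned to the affected host in ~/.ssh/config (the host name is just a placeholder):

<pre>
Host legacy-box
    HostKeyAlgorithms +ssh-dss
    KexAlgorithms +diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
</pre>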
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp
# mkdir --mode=0700 /home/sftp/.authorized_keys
</source>
==/etc/ssh/sshd_config==
<source lang=config>
...
Match Group sftp
ChrootDirectory /home/sftp/%u
AuthorizedKeysFile /home/sftp/.authorized_keys/%u
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>.
And create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
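Note that sshd only accepts a ChrootDirectory whose path components are all owned by root and not writable by group or others; a small check sketch (the path at the end is just an example):

<source lang=bash>
# Walk from the chroot up to / and flag any component that sshd
# would reject (not root-owned, or group/world-writable).
check_chroot() {
    local dir=$1
    while [ "$dir" != "/" ]; do
        if [ "$(stat -c %U "$dir")" != root ] || \
           [ $(( 8#$(stat -c %a "$dir") & 8#22 )) -ne 0 ]; then
            echo "BAD: $dir ($(stat -c '%U %a' "$dir"))"
        fi
        dir=$(dirname "$dir")
    done
}
check_chroot /home/sftp/myuser
</source>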
5e7f7338359738ed6e093c0aa9b98fb26a06edbb
Downtime Song
0
291
1334
2016-08-10T09:08:01Z
Lollypop
2
Die Seite wurde neu angelegt: „=DOWNTOWN Admin-Version (to get modified)= * Original by Petula Clark When you're alone and life is making you lonely You can always go - downtime. When you'v…“
wikitext
text/x-wiki
=DOWNTOWN Admin-Version (to be modified)=
* Original by Petula Clark
<pre>
When you're alone and life is making you lonely
You can always go - downtime.
When you've got worries all the noise and the hurry
Seems to help I know downtime.
Just listen to the music of the traffic in the city
Linger on the sidewalk where the neon signs are pretty
How can you lose?
The lights are much brighter there
you can forget all your troubles, forget all your cares
so go downtime
Things will be great when you're downtime
No finer place for sure downtime
Everything's waiting for you.
Don't hang around and let your problems surround you
There are movie shows downtime.
Maybe you know some little places to go to
where they never close downtime.
Just listen to the rhythm of a gentle bossa nova
You'll be dancing with 'em too before the night is over
happy again.
The lights are much brighter there
you can forget all your troubles, forget all your cares
so go - downtime
Where all the lights are bright downtime
waiting for you tonight downtime
you're gonna be alright now
downtime
downtime
downtime
And you may find somebody kind to help and understand you
Someone who is just like you and needs a gentle hand to
guide them along.
So maybe I'll see you there
we can forget all our troubles, forget all our cares
so go downtime
Things will be great when you're downtime
don't wait a minute more downtime
Everything is waiting for you
downtime
downtime
downtime
downtime
downtime
downtime
downtime...
</pre>
36bbb7fa1c42f6e0bc8bd716065a253b96193961
Systemd
0
233
1336
1306
2016-09-12T08:24:36Z
Lollypop
2
/* Tools */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names are usually written, this one has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features: it extends the normal init system with the ability to watch processes after they have been started, to list sockets owned by processes started by systemd, to add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs to have. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
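Which capability sets a running process actually has can be read from /proc; a quick sketch (the shell's own PID is used as an example):

<source lang=bash>
# The Cap* lines show the inheritable, permitted, effective, bounding
# and ambient capability sets of a process as hex bitmasks.
grep ^Cap /proc/$$/status
# If libcap's capsh is installed, a mask can be decoded, e.g.:
#   capsh --decode=0000000000003000
</source>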
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it currently has. This prohibits UID changes: no setuid binary will help an attacker gain more privileges than the user of the exploited service has.
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good and fat old horse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can easily be done through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The server list is a space-separated list of NTP servers.
FallbackNTP is a list of servers to use if none of the servers in the NTP list can be reached.
If you want to split the settings into multiple files or generate them at startup, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
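For example, such a drop-in file could look like this (the file name 50-local.conf and the server names are just examples):

<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local.conf
[Time]
NTP=ntp1.example.com ntp2.example.com
</source>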
After you have set up the config you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If it did not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach the ''zfs.target'' we want ''zed.service'' to be started (if it is enabled), and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down the directories are cleared. This is done with a separate namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=/tmp as a tmpfs=
/etc/systemd/system/tmp.mount
<source lang=ini>
[Unit]
Description=Temporary Directory
Documentation=man:hier(7)
Before=local-fs.target
[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=mode=1777,strictatime
</source>
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
1f952a5a7d2d09747ed50d5b9d100e6595fce46a
1337
1336
2016-09-12T08:26:26Z
Lollypop
2
/* /tmp as a tmpfs */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names are usually written, this one has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features: it extends the normal init system with the ability to watch processes after they have been started, to list sockets owned by processes started by systemd, to add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
[[Kategorie:Linux]]
=systemd=
Yes, like most daemon names, it has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty SysV init system on Linux.
It has many new features: it extends the classic init with the ability to supervise processes after they have been started, to list the sockets owned by processes started by systemd, it adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As you can see below, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that ''anacron.service'' failed to start.
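If you only want to see units in the failed state, ''systemctl'' can filter for them directly (the listing here is a shortened, illustrative version of the session above):
<source lang=bash>
# systemctl --failed
UNIT            LOAD   ACTIVE SUB    DESCRIPTION
● anacron.service loaded failed failed Run anacron jobs
...
</source>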
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, someone deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
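The journal keeps the same messages per unit, so besides ''systemctl status'' you can also read a unit's full log with ''journalctl'' (excerpt reusing the messages from above):
<source lang=bash>
# journalctl -u anacron.service
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
</source>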
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
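As a sketch of how to apply this to your own services (the unit name ''mydaemon.service'' and the capability set are made up for illustration), a drop-in file such as <i>/etc/systemd/system/mydaemon.service.d/caps.conf</i> could restrict a service to binding privileged ports and nothing more:
<source lang=ini>
[Service]
# Keep only the capability to bind ports below 1024;
# all other capabilities are dropped from the bounding set.
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
</source>
After creating the drop-in, run ''systemctl daemon-reload'' and restart the service.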
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it currently has; privilege escalation is prohibited. No setuid binary will help an attacker gain more privileges than the user of the exploited service.
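A minimal sketch (again with a hypothetical ''mydaemon.service''): the option goes into the [Service] section, e.g. as a drop-in <i>/etc/systemd/system/mydaemon.service.d/nnp.conf</i>:
<source lang=ini>
[Service]
# Neither this process nor any of its children can ever gain
# new privileges, e.g. through setuid binaries.
NoNewPrivileges=true
</source>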
=systemd-timesyncd, an alternative to ntp=
The classic ntpd is a good but fat old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
It is easily configured through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are tried if none of the ''NTP'' servers can be reached.
If you want to split the configuration into multiple files, or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
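For example, a drop-in file could look like this (file name and server names are purely illustrative):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/local.conf
[Time]
NTP=ntp1.example.org ntp2.example.org
</source>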
After you have set up the configuration you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should stop and disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'', ''zed.service'' is wanted (started if it is enabled), while ''zfs-mount.service'' and ''zfs-share.service'' are strictly required.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
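A sketch of these options (the paths are chosen purely for illustration): they restrict which parts of the file system a unit may write to, may only read, or may not see at all. They go into the [Service] section:
<source lang=ini>
[Service]
# The unit may write only to its own state directory ...
ReadWriteDirectories=/var/lib/mydaemon
# ... sees /etc read-only ...
ReadOnlyDirectories=/etc
# ... and cannot see the home directories at all.
InaccessibleDirectories=/home
</source>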
====Private Tmp-Directories====
This mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are removed. This is done with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1> [<unit2> …]'' (a space-separated list in the [Unit] section of the joining units).
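A sketch with two hypothetical units: ''a.service'' owns the private namespace and ''b.service'' joins it (both need ''PrivateTmp=true''):
<source lang=ini>
# a.service
[Service]
PrivateTmp=true
</source>
<source lang=ini>
# b.service
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
</source>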
==[Service]==
==[Install]==
=/tmp as a tmpfs=
Create <i>/etc/systemd/system/tmp.mount</i>:
<source lang=ini>
[Unit]
Description=Temporary Directory
Documentation=man:mount(8)
Before=local-fs.target
[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=mode=1777,strictatime
</source>
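Then reload systemd and activate the mount unit; afterwards ''df'' should show a ''tmpfs'' on <i>/tmp</i> (the size below is illustrative):
<source lang=bash>
# systemctl daemon-reload
# systemctl enable tmp.mount
# systemctl start tmp.mount
# df -h /tmp
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3,9G     0  3,9G   0% /tmp
</source>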
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it does not work any more:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works for root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
c2879677d7b654f610330930b4236fb48f948218
1339
1338
2016-09-12T08:30:28Z
Lollypop
2
/* /tmp as a tmpfs */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemons are usually written this has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the normal init system with the ability to watch processes after the start has done, list sockets owned by processes started with systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs to have. Even if it starts as root all unnessesary capabilities are dropped for starting the process.
I dont want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here but you can take a look to understand what this capabilities are.
'''BUT''' beware of programs which just test on UID 0!
==Nailing a process to it's rights : NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the processtree from this level on will stuck at the UID and the privileges it has. This prohibits UID changes. No set UID binary will help the hacker to get more privileges than the user of the exploited service.
=systemd-timesyncd an alternative to ntp=
The ntpd is a good and fat old horse for servers but clients do not necessarily need this one. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The server list is a space separated list of NTP servers.
FallbackNTP is a list of servers if none of the NTP list could be reached.
If you want to split them into multiple files or generate them at start you can put files with the ending <i>.conf</i> in <i>/etc/systemd/timesyncd.conf.d/</i>.
After you setup the config you can enable the timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', ''zed.service'' should be started if it is enabled (''Wants''), and ''zfs-mount.service'' and ''zfs-share.service'' are strictly required (''Requires'').
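The difference matters for failure handling: a ''Requires='' dependency that fails takes this unit down with it, while a failing ''Wants='' dependency is tolerated. A made-up sketch:
<source lang=ini>
# my-app.target (hypothetical): the database is mandatory, the cache is optional
[Unit]
Description=My application stack
Requires=database.service
Wants=cache.service

[Install]
WantedBy=multi-user.target
</source>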
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts private instances of /tmp and /var/tmp which live only as long as the unit is up. When the unit goes down, the directories are removed. This is done with a separate namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1> <unit2> ...'' (a space-separated list of units).
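A sketch of two hypothetical units sharing one private /tmp (both units need ''PrivateTmp=true''):
<source lang=ini>
# b.service (hypothetical) joins the namespace of a.service (hypothetical)
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
</source>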
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
SSL and TLS
[[Kategorie: Security]]
=Web=
==HTTPS==
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
The max-age is entered in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites any link on this page to HTTPS, even if the link is an HTTP link. If the secure connection cannot be established because of certificate errors, the browser will refuse to load the page. If this header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is available on [https://github.com/hannob/hpkp Github].
I added a create option, which makes the script more comfortable for me, at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for adding Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
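If you want to derive a single pin by hand instead of using the script, the usual recipe is: extract the public key, DER-encode it, hash it with SHA-256 and base64-encode the result. Sketched here with a throwaway self-signed certificate (file names made up):
<source lang=bash>
# Create a throwaway key and certificate only for demonstration
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -nodes -subj "/CN=demo.example" 2>/dev/null
# SPKI pin: public key -> DER -> SHA-256 -> base64
pin=$(openssl x509 -pubkey -noout -in /tmp/demo.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64)
echo "pin-sha256=\"${pin}\""
</source>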
=Mail=
==STARTTLS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</source>
You can specify the security priority for the handshake like this:
<source lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</source>
Or use sslscan to check the available ciphers:
<source lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</source>
==SMTPS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -connect <mailserver>:465
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --port 465 <mailserver>
</source>
Solaris process debugging
[[Kategorie:Solaris|Debugging]]
==Swap usage per process==
<source lang=awk>
# pgrep . | xargs -n 1 pmap -S 2>/dev/null | nawk '
function kb2h(value){
unit=1;
while(value>=1024){
unit++;
value/=1024;
};
split("KB,MB,GB,TB,PB", unit_string, /,/);
return sprintf("%7.2f %s",value,unit_string[unit]);
}
/[0-9]+:/ {
pid=$1;
prog=$2;
}
/^total/{
swap_total+=$3;
printf ("%s\t%s\t%s\n",pid,kb2h($3),prog);
}
END{
printf "Total:\t%s\n",kb2h(swap_total);
}'
</source>
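The ''kb2h'' helper can be exercised stand-alone with any awk (no Solaris needed); 5242880 KB should come out as 5 GB:
<source lang=bash>
result=$(awk 'function kb2h(value){
  unit=1;
  while(value>=1024){
    unit++;
    value/=1024;
  };
  split("KB,MB,GB,TB,PB", unit_string, /,/);
  return sprintf("%7.2f %s",value,unit_string[unit]);
}
BEGIN{ print kb2h(5242880) }')
echo "$result"   # ->    5.00 GB
</source>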
==Set the core file size limit on a process==
For example for sshd (and all resulting children from now on):
<source lang=bash>
ssh-server# prctl -n process.max-core-size -v 2g -t privileged -r -e deny $(pgrep -u root -o sshd)
</source>
Check:
<source lang=bash>
ssh-server# prctl -n process.max-core-size $(pgrep -u root -o sshd)
process: 1491: /usr/lib/ssh/sshd
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-core-size
privileged 2.00GB - deny -
system 8.00EB max deny -
</source>
Now all processes (for example, newly logged-in users) will have a core file size limit of 2GB... really? No!
<source lang=bash>
ssh-client# ssh ssh-server
ssh-server# ulimit -Ha | grep core
core file size (blocks, -c) 2097152
</source>
See what it says: blocks <-- !!!
From the man page: ''-c  Maximum core file size (in 512-byte blocks)''
VMWare Linux parameter
[[Kategorie:VMWare]][[Kategorie:Ubuntu]]
==/etc/sysctl.conf==
<source lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</source>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<source lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</source>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<source lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</source>
<source lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</source>
==Prebuild packages from VMWare==
<source lang=bash>
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/vmware-repository
apt-key adv --keyserver subkeys.pgp.net --recv-keys C0B5E0AB66FD4949
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</source>
==Source from VMWare==
After you removed previously installed vmware-tools, just follow these steps:
1. Add this to your /etc/apt/sources.list:
<source lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</source>
Then do:
<source lang=bash>
gpg --search C0B5E0AB66FD4949 # search and add the key (choose entry 1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add --
</source>
2. Update your package database:
<source lang=bash>
# aptitude update
</source>
3. Get Module-Assistant:
<source lang=bash>
# aptitude install module-assistant
</source>
4. Get the base packages:
<source lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</source>
5. Get the modules:
<source lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</source>
6. Get kernel and headers:
<source lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</source>
7. Compile and install the modules with module assistant
<source lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</source>
== Minimal /etc/vmware-tools/config ==
<source lang=bash>
libdir = "/usr/lib/vmware-tools"
</source>
== Switch to Ubuntu open-vm-tools ==
<source lang=bash>
# /usr/bin/vmware-uninstall-tools.pl ; aptitude purge open-vm-tools ; apt update ; apt install open-vm-tools
</source>
NetApp and Solaris
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
'''Just some unsorted lines...'''
'''Working on it... don't believe what you can read here! It is not verified yet.'''
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<source lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</source>
===Check it out===
<source lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</source>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If you have 4k as block size in your storage use ashift=12 (alignment shift exponent).
===Status of alignment===
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</source>
Or use "lun alignment show":
<source lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Widespread reads. I think the ashift is not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</source>
Or "stats show lun":
<source lang=bash>
filer01*> stats show -e lun:/vol/TEMP201:.*_align_histo.*
</source>
===ashift=12? Why 12?===
<source lang=bash>
# echo "2^12" | bc -l
4096
</source>
OK... 4k... I see.
===What ashift do I have?===
<source lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</source>
===Create ZPools on NetApp LUNs with this syntax===
<source lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</source>
===Solaris Cluster===
<source lang=bash>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | nawk '$3 ~ /^\/dev\//{line=$0;gsub(/s[0-9]+$/,"",$3);command="/usr/cluster/bin/cldev list "$3; command | getline; close(command); print line,$1; next;}NR==2{print $0,"DID";next;}NR==3{print $0"-------";next}{print;}'
controller(7mode)/ device host lun
vserver(Cmode) lun-pathname filename adapter protocol size mode DID
--------------------------------------------------------------------------------------------------------------------------------------------------------
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_1 /dev/rdsk/c0t600A0980383033777B244834556D4865d0s2 iscsi0 iSCSI 500.1g C d5
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_2 /dev/rdsk/c0t600A0980383033777B244834556D4866d0s2 iscsi0 iSCSI 500.1g C d6
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_3 /dev/rdsk/c0t600A0980383033777B244834556D4867d0s2 iscsi0 iSCSI 500.1g C d7
...
</source>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
Awk cheatsheet
[[Kategorie:AWK|Cheatsheet]]
==Functions==
===Bytes to human readable===
<source lang=awk>
function b2h(value){
# Bytes to human readable
unit=1;
while(value>=1024){
unit++;
value/=1024;
}
split("B,KB,MB,GB,TB,PB", unit_string, /,/);
return sprintf("%.2f%s",value,unit_string[unit]);
}
</source>
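A quick stand-alone check of ''b2h'':
<source lang=bash>
result=$(awk 'function b2h(value){
  # Bytes to human readable
  unit=1;
  while(value>=1024){
    unit++;
    value/=1024;
  }
  split("B,KB,MB,GB,TB,PB", unit_string, /,/);
  return sprintf("%.2f%s",value,unit_string[unit]);
}
BEGIN{ print b2h(1536), b2h(1073741824) }')
echo "$result"   # -> 1.50KB 1.00GB
</source>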
===Binary to decimal===
<source lang=awk>
function b2d(bin){
len=length(bin);
for(i=1;i<=len;i++){
dec+=substr(bin,i,1)*2^(len-i);
}
return dec;
}
</source>
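And ''b2d'' in action (note that ''dec'' is a global here and accumulates across calls, so a fresh awk process per conversion is the safe way to use it):
<source lang=bash>
result=$(awk 'function b2d(bin){
  len=length(bin);
  for(i=1;i<=len;i++){
    dec+=substr(bin,i,1)*2^(len-i);
  }
  return dec;
}
BEGIN{ print b2d("101101") }')
echo "$result"   # -> 45
</source>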
===Quicksort===
This is not my code! It is taken from [http://awk.info/?quicksort here], maybe slightly modified; I cannot check, the site is down.
You can call it like this: qsort(array,1,length(array));
<source lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</source>
Same for alphanumeric:
<source lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</source>
Test:
<source lang=awk>
BEGIN {
string="1524097359810345254";
split(string,array_i,"");
string="ThisIsAQsortExample";
split(string,array_a,"");
}
# BEGIN http://awk.info/?quicksort
function qsort_i(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort_i(A, left, last-1)
qsort_i(A, last+1, right)
}
function qsort_a(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort_a(A, left, last-1)
qsort_a(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
END {
# loop by index: "for (element in array)" returns keys in unspecified order
for(i=1; i<=length(array_i); i++)
printf "%s", array_i[i];
printf " ===qsort==> ";
qsort_i(array_i, 1, length(array_i));
for(i=1; i<=length(array_i); i++)
printf "%s", array_i[i];
print "";
for(i=1; i<=length(array_a); i++)
printf "%s", array_a[i];
printf " ===qsort==> ";
qsort_a(array_a, 1, length(array_a));
for(i=1; i<=length(array_a); i++)
printf "%s", array_a[i];
print "";
}
</source>
which outputs:
<pre>
1524097359810345254 ===qsort==> 0011223344455557899
ThisIsAQsortExample ===qsort==> AEIQTaehilmoprssstx
</pre>
===Sort words inside braces (gawk)===
Written for beautifying lines like
<source lang=mysql>
GRANT SELECT (account_id, user, enable_imap, fk_domain_id, enable_virusscan, max_msg_size, from_authuser_only, id, time_start, changed_by, enable_spamblocker, onhold, time_end, archive, mailbox_quota) ON `mail_db`.`mail_account` TO 'user'@'172.16.16.16'
</source>
This function:
<source lang=awk>
function inner_brace_sort (rest, delimiter) {
sorted="";
while( match(rest,/\([^\)]+\)/) ) {
sorted=sprintf("%s%s", sorted, substr(rest, 1, RSTART));
inner=substr(rest, RSTART+1, RLENGTH-2);
rest=substr(rest, RSTART+RLENGTH-1, length(rest));
split(inner, inner_a, delimiter);
inner_l=asort(inner_a, inner_s);
for(i=1; i<=inner_l; i++) {
sorted=sprintf("%s%s", sorted, inner_s[i]);
if(i<inner_l) sorted=sprintf("%s, ", sorted);
}
sorted=sprintf("%s", sorted);
}
return sorted""rest;
}
</source>
Sorts the fields inside the braces alphabetically and can be called like this:
<source lang=awk>
/\(/ {
print inner_brace_sort($0, ",[ ]*");
}
</source>
2dd25878b82e3a2ffcc113313006a1bddab0b191
Category:AWK
14
293
1346
2016-10-14T10:37:09Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Linux Tipps und Tricks
0
273
1352
1245
2016-10-21T12:10:20Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online command to the already-online device; this forces the zpool to re-check the size and resize itself without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
dcad30d5ec26fade26ee3dc9d52008b47b2d3e57
1357
1352
2016-10-21T14:57:31Z
Lollypop
2
/* Resize a GPT partition */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
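The check-and-delete step can be wrapped in a small guard function; this is only a sketch (the name scsi_delete is made up) and it must run as root:

```shell
# Sketch: remove a SCSI disk by its H:C:T:L id as printed by lsscsi.
# Usage: scsi_delete 32:0:0:0   (run as root; the id must exist in sysfs)
scsi_delete() {
    local id=$1
    local path="/sys/bus/scsi/drivers/sd/${id}/delete"
    if [ ! -e "${path}" ]; then
        echo "No such SCSI disk: ${id}" >&2
        return 1
    fi
    echo 1 > "${path}"
}
```

Double-check with lsscsi (and mount, pvs, zpool status) before calling it, exactly as described above.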
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online command to the already-online device; this forces the zpool to re-check the size and resize itself without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
c786ec4ef349ca6f872676546185cabe1aeea2cf
MariaDB on ZFS
0
294
1353
2016-10-21T12:42:49Z
Lollypop
2
Die Seite wurde neu angelegt: „ <source lang=bash> # zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG RECSIZE PRIMARYCACHE COMPRESS RATI…“
wikitext
text/x-wiki
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
skip-innodb_doublewrite
</source>
642249c981a81f20ec74facaef8ed442248fcba7
1354
1353
2016-10-21T12:46:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
skip-innodb_doublewrite
</source>
96669d4268bf72ea69a93155b423c39674791d3e
1355
1354
2016-10-21T12:59:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
==ZFS parameters==
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
===If you have innodb_file_per_table=on===
<source lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</source>
* If you only have InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
skip-innodb_doublewrite
</source>
<source lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--replicate_do_db=mail_db
--replicate_wild_ignore_table=mail_db.bayes_%
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</source>
10ba55123757b4b876fe3ff4d80e5abe797156f7
1356
1355
2016-10-21T13:00:14Z
Lollypop
2
/* Database parameters for ZFS */
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
==ZFS parameters==
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
===If you have innodb_file_per_table=on===
<source lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</source>
* If you only have InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
skip-innodb_doublewrite
</source>
<source lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</source>
c7c94c4323696d3a28d91340795fbd71b66339ea
1358
1356
2016-10-21T15:08:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
==ZFS parameters==
<source lang=bash>
zfs set atime=off MYSQL-DATA
zfs set compression=lz4 MYSQL-DATA
zfs set atime=off MYSQL-LOG
zfs set compression=lz4 MYSQL-LOG
zfs set recordsize=8k MYSQL-DATA/data
zfs set recordsize=16k MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-LOG/ib_log
</source>
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
===If you have innodb_file_per_table=on===
<source lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</source>
* If you only have InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
skip-innodb_doublewrite
</source>
<source lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</source>
6e15b11d6dd58d676ecd2f5ff94dfd22eabf78b9
1359
1358
2016-10-21T15:10:11Z
Lollypop
2
/* Database parameters for ZFS */
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
==ZFS parameters==
<source lang=bash>
zfs set atime=off MYSQL-DATA
zfs set compression=lz4 MYSQL-DATA
zfs set atime=off MYSQL-LOG
zfs set compression=lz4 MYSQL-LOG
zfs set recordsize=8k MYSQL-DATA/data
zfs set recordsize=16k MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-LOG/ib_log
</source>
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
===If you have innodb_file_per_table=on===
<source lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</source>
* If you only have InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = off
skip-innodb_doublewrite
</source>
<source lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</source>
e2b8905f546dd1dec6ba4a5bf35c804c6e4d4d00
Bash cheatsheet
0
37
1360
1269
2016-11-03T09:48:29Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, the sudo user is used instead.
If $SUDO_USER is empty too, "default" is used as the extension.
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, except for the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
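Usage sketch; the function is repeated from above so the snippet is self-contained, and /usr/bin/env is just a convenient existing path:

```shell
# dir_resolve as defined above, repeated here so the example runs standalone
dir_resolve() {
    local dir=${1%/*}
    local file=${1##*/}
    # if the name does not contain a / leave file blank or the name will be name/name
    [ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
    [ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
    pushd "$dir" &>/dev/null || return $? # On error, return error code
    echo ${PWD}${file:+"/"${file}}        # output full path with filename
    popd &> /dev/null
}

dir_resolve /usr/bin/env          # an absolute path comes back unchanged
( cd /usr && dir_resolve bin )    # a relative directory resolves to /usr/bin
```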
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with several loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the no-op command <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
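The $[ ] syntax is deprecated; the same calculations with the POSIX $(( )) form (note that ** is still a bash extension):

```shell
echo $(( 3 + 4 ))    # prints 7
echo $(( 2 ** 8 ))   # prints 256; ** (exponentiation) is a bash extension
```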
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $*"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
e1a83cc47087dcc4c403b2ff507817d26880af04
Solaris zone memory on the fly
0
118
1361
666
2016-11-08T10:44:35Z
Lollypop
2
/* Change settings for the running zone */
wikitext
text/x-wiki
[[Kategorie:Solaris|Zone Memory]]
= Setting memory parameter for running zones =
You can change memory parameters for running zones. But remember to make the change persistent by updating the zone config file, too.
That is why I always do that in advance.
== Change setting in the config file ==
<source lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</source>
== Change settings for the running zone ==
===First take a look===
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 65536 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
===Set the new values===
<source lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
# prctl -n zone.max-locked-memory -v 16g -t privileged -r -e deny -i zone myzone
</source>
===Prove values===
<source lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</source>
Done.
b9def6e7f9f56cced31751d8c0e0a2f7b7fd5564
ZFS on Linux
0
222
1366
832
2016-11-08T13:28:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as a rawvmdk device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
3a91a0f46b8282afe19e0791af86f0836b851422
1369
1366
2016-11-10T09:53:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as a rawvmdk device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
[https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS]
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
Comment out: GRUB_HIDDEN_TIMEOUT=0
Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Links==
* [[https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]]
* [[https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]]
6c256e0462398e33fec7d9faa659afb2bd6ba932
1370
1369
2016-11-10T10:25:29Z
Lollypop
2
/* Setup Ubuntu 16.04 with ZFS root */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
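As a quick sanity check, the link name the rule composes can be simulated with example values for the udev environment variables (the serial below matches the disk used later on this page; the variable values are illustrative, not read from udev):
<source lang=bash>
# Compose the by-id style name the udev rule above would create for
# partition 1, using example values for ID_BUS and ID_SERIAL
ID_BUS=scsi
ID_SERIAL=36000c2932cdb62febff0b5ac93786dd4
PARTN=1
link="${ID_BUS}-${ID_SERIAL}-part${PARTN}"
echo "/dev/$link"   # /dev/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
</source>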
==Virtualbox on ZVols==
If you use ZVols as raw VMDK devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
Then, as vmuser, create the raw VMDK that points at the ZVol:
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this follows [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the live CD) and choose "Try Ubuntu".
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh and continue from there
apt install --yes debootstrap gdisk zfs-initramfs
# Partition the disk: BIOS boot (EF02), an 8 MB reserved partition at the
# end (BF07), and the rest for the ZFS pool (BF01)
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
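Two of the one-liners in the install log above are easy to get wrong, so here is a throwaway re-run of them outside the installer (bash, temp files only; the proxy credentials are example values):
<source lang=bash>
# 1) Extract the proxy URL from an apt.conf line, as done for the
#    http_proxy export above
tmp=$(mktemp -d)
echo 'Acquire::http::Proxy "http://user:pass@proxyhost:3128";' > "$tmp/apt.conf"
proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' "$tmp/apt.conf")
echo "$proxy"   # http://user:pass@proxyhost:3128
# 2) Expand the three suites and write one deb line each, as done for
#    /mnt/etc/apt/sources.list above
echo -n xenial{,-security,-updates} | \
    xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > "$tmp/sources.list"
cat "$tmp/sources.list"
</source>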
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
42b97f83262c54d708852c167f7475b5de5d5da8
MariaDB SSL
0
295
1367
2016-11-09T20:24:34Z
Lollypop
2
Page created: „[[Kategorie:MariaDB|SSL]] [[Kategorie:MySQL|SSL]] ==Create keys and certificates== <source lang=bash> openssl genrsa 2048 > ca-key.pem openssl req -new -x509…“
wikitext
text/x-wiki
1368
1367
2016-11-09T20:24:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MariaDB|SSL]]
[[Kategorie:MySQL|SSL]]
To be continued!
==Create keys and certificates==
<source lang=bash>
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server'
</source>
<source lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=web-server.domain.de'
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
</source>
<source lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server.domain.de'
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
</source>
<source lang=bash>
chown mysql:www-data *
chown www-data:www-data client-key.pem
chmod 644 *-cert.pem
chmod 600 *-key.pem
</source>
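Before pointing clients at these files, it is worth checking that the signed certificates actually chain back to the CA. A self-contained sketch (throwaway directory, hypothetical CN values; the commands mirror the ones above):
<source lang=bash>
# Build a throwaway CA, sign a server cert with it, then verify the chain
tmp=$(mktemp -d)
cd "$tmp"
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/CN=test-ca'
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server-req.pem -subj '/CN=db-server.example'
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
openssl verify -CAfile ca-cert.pem server-cert.pem   # server-cert.pem: OK
</source>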
<source lang=php>
# php -r '
$db = new PDO("mysql:host=db-server.domain.de;dbname=testdb", "ssltestuser", "ssltestuserpassword",
array(
PDO::MYSQL_ATTR_SSL_CA=>"/etc/mysql/ssl/ca-cert.pem",
PDO::MYSQL_ATTR_SSL_KEY=>"/etc/mysql/ssl/client-key.pem",
PDO::MYSQL_ATTR_SSL_CERT=>"/etc/mysql/ssl/client-cert.pem",
PDO::MYSQL_ATTR_SSL_CAPATH=>"/etc/ssl/certs"
)
);
$result = $db->query("SHOW STATUS LIKE \"SSL_%\"");
$result->execute();
$status=$result->fetchAll();
print_r($status);
'
</source>
87c4c1c87695101548ac28bb1eaa20c114318081
Solaris 11 Networking
0
96
1371
1160
2016-11-29T12:07:18Z
Lollypop
2
/* Switch to manual configuration */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
= Switch to manual configuration =
To prevent the automatic network configuration profiles from reverting your changes, switch to the fixed (manual) configuration mode.
<pre>
# netadm enable -p ncp DefaultFixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
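The perl substitution above only rewrites a hosts: line that consists of exactly "files"; a dry run on a scratch copy shows the effect (assuming perl is available, as it is on Solaris 11):
<source lang=bash>
# Dry run of the nsswitch.conf edit on a throwaway file
tmp_nss=$(mktemp)
printf 'passwd: files\nhosts: files\n' > "$tmp_nss"
perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" "$tmp_nss"
grep '^hosts:' "$tmp_nss"   # hosts: files dns
</source>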
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm link.
To change the MTU of the dladm link, the ipadm interface has to be disabled first (DOWNTIME! BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
= Aggregate for iSCSI =
This is crude, but it worked on our Cisco switches:
<source lang=bash>
# dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
= Set TCP parameters in immutable zones =
In normal immutable mode, even zlogin -U cannot change the setting:
<source lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</source>
You need to boot the zone writable first:
<source lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</source>
3986dbcd9fe939824764c0263a192eaf1d239071
MySQL Tipps und Tricks
0
197
1372
1152
2016-12-01T13:34:32Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
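The shape of that one-liner, generating one quoted identity per line and then running a command per identity via xargs, can be sketched without a server (echo stands in for the mysql client here; -I{} is the current spelling of the deprecated xargs -i):
<source lang=bash>
# Feed each user@host identity to a per-identity command; echo stands in
# for "mysql --execute"
printf "%s\n" '`root`@`localhost`' '`app`@`%`' | \
    xargs -n 1 -I{} echo "show grants for {}"
</source>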
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files given by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups it with the related data changes)
* data=journal (worst performance but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
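The same sanity check works without bc, using integer shell arithmetic on the byte count that fdisk reports:
<source lang=bash>
# Convert the fdisk byte count to GiB with shell arithmetic
bytes=26843545600
gib=$((bytes / 1024 / 1024 / 1024))
echo "${gib} GB"   # 25 GB
</source>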
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
And... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
e3b44fbd6c36d66a6ab67db0b0254aee064c2285
HP 3par
0
213
1373
944
2016-12-08T08:56:04Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
Unsorted collection... Don't do this...
It doesn't really work this way...
<source lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</source>
<source lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</source>
<source lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</source>
<source lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</source>
<source lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</source>
==Group virtual volumes to sets (vv -> vvset)==
<source lang=bash>
3par-storage cli% createvvset -comment "Set for all vvs of Solaris Devel" DevelVVSet
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.3
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.4
</source>
==Create a set of initiators==
<source lang=bash>
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c2 21000024ff8f5aae
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c3 21000024ff8f5aaf
</source>
<source lang=bash>
3par-storage cli% createhostset DevelHosts
3par-storage cli% createhostset -add DevelHosts unix14_c2
3par-storage cli% createhostset -add DevelHosts unix14_c3
</source>
==Map virtual volumes as LUNs to a set of initiators==
<source lang=bash>
3par-storage cli% createvlun set:DevelVVSet 0+ set:DevelHosts
</source>
This maps all VVs from DevelVVSet to all hosts in DevelHosts, with automatic LUN numbering (+) starting at 0.
<source lang=bash>
3par-storage cli% showvlun
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type Status ID
0 VV_DB_TEST01_DATA_DS.3 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
0 VV_DB_TEST01_DATA_DS.3 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
-----------------------------------------------------------------------------------------------
4 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 set:DevelVVset set:DevelHosts ---------------- --- host set
---------------------------------------------------------------------
1 total
</source>
==Watch disk initialization==
<source lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</source>
== Solaris ==
===/kernel/drv/sd.conf===
<pre>
sd-config-list="3PARdataVV","physical-block-size:16384";
</pre>
67c8c2f60c8ae6fbff23eb78340fda94a53e5300
Solaris mdb magic
0
23
1374
661
2016-12-08T11:45:31Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Modular Debugger]]
=Various small mdb tricks=
==Memory usage==
<pre>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</pre>
==Query kernel parameters==
Syntax: echo '<Parameter>/D' | mdb -k
<pre>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</pre>
==Set kernel parameters==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<pre>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</pre>
==Inquiry strings in Solaris 11==
<source lang=bash>
# echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb -k
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
inq_vid = [ "NECVMWar" ]
inq_pid = [ "VMware SATA CD00" ]
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
</source>
967c56648e71ed350864b28c7a70750169195a7d
Exim cheatsheet
0
27
1375
1294
2016-12-15T08:58:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==Show the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all queued mails to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all queued mails from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all queued mails that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 minutes, younger than 50 minutes, and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
Use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
==Spam==
<source lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n) ; do if [ "$(basename $file .gz)" == "$(basename $file)" ] ; then command="cat" ; else command="gzip -cd"; fi; printf "%16s - %16s : %7s\t%s\n" "$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" "$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" "$(${command} ${file} | grep -c 'result: Y')" "$(basename ${file})"; done
</source>
cb0355c5910692eadb052b6cbc1e9dcd57ecd354
1376
1375
2016-12-15T09:00:55Z
Lollypop
2
/* Spam */
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==Show the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all queued mails to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all queued mails from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all queued mails that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 minutes, younger than 50 minutes, and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
Use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
==Spam==
<source lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</source>
0468131a378ecd3347c3e2fb20b2110f1a886c49
1396
1376
2017-03-22T08:37:55Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==Show the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all queued mails to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all queued mails from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all queued mails that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 minutes, younger than 50 minutes, and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
Use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
==Spam==
<source lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</source>
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<source lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</source>
But logrotate with these files is a little tricky.
I found this to be a good way to rotate them:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days:
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
63170ef0c54fc3df8fe7551067ec0d9864247977
1397
1396
2017-03-22T08:38:12Z
Lollypop
2
/* /etc/logrotate.d/exim */
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==Show the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all queued mails to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all queued mails from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all queued mails that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 minutes, younger than 50 minutes, and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
Use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
==Spam==
<source lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</source>
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<source lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</source>
But logrotate with these files is a little tricky.
I found this to be a good way to rotate them:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
deeea12c69e5bf06807687c827e7e4fda0efedd6
1398
1397
2017-03-22T08:40:00Z
Lollypop
2
/* Logrotation with datestamped logfiles */
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==Show the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all queued mails to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all queued mails from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all queued mails that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 minutes, younger than 50 minutes, and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
Use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
==Spam==
<source lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</source>
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<source lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</source>
But logrotate with these files is a little tricky.
I found this to be a good way to rotate them:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
== touch the dummy rotate file ==
This file is needed to trigger the rotation, even though it is only a dummy.
<source lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</source>
e6a3f8b0d18ec43438cbf00fdb08c35d92097ad8
1399
1398
2017-03-22T08:40:23Z
Lollypop
2
/* /etc/logrotate.d/exim */
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==Show the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all queued mails to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all queued mails from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all queued mails that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 minutes, younger than 50 minutes, and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<source lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</source>
Delete the entries:
Use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<source lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</source>
==Spam==
<source lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</source>
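The loop above leans on two small shell tricks: sort -t'.' -k3n,3n orders the rotated files numerically by their trailing rotation index, and comparing "basename $file .gz" with "basename $file" detects whether a file is gzipped and therefore needs gzip -cd instead of cat. A minimal sketch of both (the file names here are made-up examples):

```shell
# Numeric sort on the third dot-separated field puts rotated logs in
# their natural order: .1, .2, .10 (a plain lexical sort would give .1, .10, .2).
printf '%s\n' spamd-exim-acl.log.10.gz spamd-exim-acl.log.2.gz spamd-exim-acl.log.1.gz \
  | sort -t'.' -k3n,3n

# basename only strips the .gz suffix when it is present, so the comparison
# tells us whether to read the file with cat or with gzip -cd.
for f in spamd-exim-acl.log spamd-exim-acl.log.1.gz ; do
  if [ "$(basename "$f" .gz)" = "$(basename "$f")" ] ; then
    echo "$f: cat"
  else
    echo "$f: gzip -cd"
  fi
done
```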
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<source lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</source>
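The %D in log_file_path expands to the current date, so each day's logs get names like mainlog-20170322. A quick sketch of how to build today's expected filename in shell (assuming GNU date):

```shell
# %D in Exim's log_file_path expands to the current date as yyyymmdd,
# so today's main log is mainlog-<yyyymmdd>. Reproduce the name in shell:
stamp=$(date +%Y%m%d)
echo "mainlog-${stamp}"
```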
But the logrotate with this files is a little bit tricky.
I found this as a good way to rotate the logfiles:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
== touch the dummy rotate file ==
This file is needed to trigger the rotation, even though it is only a dummy.
<source lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</source>
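Before wiring the find regex into the lastaction script, it can be sanity-checked against a throwaway directory (a sketch, assuming GNU find with -regextype support; the file names are examples):

```shell
# Throwaway directory with names like the ones the logrotate action sees.
tmp=$(mktemp -d)
touch "$tmp/mainlog-20170321" "$tmp/rejectlog-20170321" "$tmp/paniclog" "$tmp/other.log"
# The same regex as in the lastaction script: datestamped main/reject logs
# plus paniclog. other.log must not match.
find "$tmp" -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' | sort
rm -rf "$tmp"
```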
51e518e08aa36405d4967bd3dd5cbb6347762d85
Solaris grub
0
199
1377
658
2016-12-19T17:49:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Grub]]
== SP console on x86 systems ==
=== Set speed and port in GRUB ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Make the speed known to Solaris ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set the console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
=== Setting the speed in the BIOS ===
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
=== Setting the speed in the SP ===
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
7ee250bde1cd97fd27f185be656c3383987efcfc
1386
1385
2016-12-20T14:56:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|Grub]]
[[Kategorie:Grub|Solaris]]
== SP console on x86 systems ==
=== Setting speed and port in GRUB ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Making the speed known to Solaris ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Takes effect after a reboot.
=== Setting the console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
=== Setting the speed in the BIOS ===
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
=== Setting the speed in the SP ===
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named <i>Solaris11.3SRU15</i>; remember to replace it with your own boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal GRUB is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
ac1ccaaf30eeba3e27c15bd5aef88e8a125c5a3d
SSH Tipps und Tricks
0
75
1381
1333
2016-12-19T18:13:29Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the target=
==SSH across one or more hops==
To get an SSH connection from Host_A to Host_B you have to tunnel through two machines on the way (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to drag port forwardings or the SOCKS5 proxy along. It is easier to define ProxyCommands for the whole path from Host_A to Host_B.
Host_B can only be reached from GW_2, so we create an entry for it in the ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn can only be reached via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunnelled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are then done simply like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
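As a side note (not from the original text): on OpenSSH 7.3 or newer the same multi-hop chain can be declared without the /dev/tcp trick, using ProxyJump in the ~/.ssh/config:
<pre>
Host Host_B
    ProxyJump GW_1,GW_2
</pre>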
==Breaking out of paradise==
Problem: the environment you are sitting in is so unhappily walled in with firewalls that you cannot work. But you need to get out via SSH to quickly look something up or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then put the following into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And presto, <i>ssh ssh-via-proxy</i> puts you on the SSH target you wanted to reach. Of course that host can in turn be used in another ProxyCommand, and so on.
==Oh right... the internal wiki...==
Not a problem either if it is only reachable from the internal network; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or, again, via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
Then run <i>ssh -N -f wiki</i> (I have not yet found config-file equivalents for -N and -f; newer OpenSSH versions provide <i>SessionType none</i> and <i>ForkAfterAuthentication yes</i>).
=The fingerprint=
Shorter strings of digits are easier to verify, so the fingerprint is handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
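Newer OpenSSH versions print SHA256 fingerprints by default; to compare against an old MD5-style colon fingerprint like the one above, pick the hash explicitly with -E (shown here on a throwaway demo key, since the exact key path varies):
<source lang=bash>
# Generate a throwaway key only for this demonstration
d=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "${d}/demo_key"
ssh-keygen -lf "${d}/demo_key.pub" -E md5     # old colon-separated MD5 style
ssh-keygen -lf "${d}/demo_key.pub" -E sha256  # current default
rm -rf "${d}"
</source>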
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
=Problems with older remote hosts=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
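If a legacy box is contacted regularly, the same workarounds can be made permanent in the ~/.ssh/config (a sketch; <i>legacy-host</i> is a placeholder, and the '+' prefix needs OpenSSH 7.0 or newer):
<pre>
Host legacy-host
    HostKeyAlgorithms +ssh-dss
    KexAlgorithms +diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
</pre>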
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp
# mkdir --mode=0700 /home/sftp/.authorized_keys
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
Match Group sftp
ChrootDirectory /home/sftp/%u
AuthorizedKeysFile /home/sftp/.authorized_keys/%u
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp
</source>
==Create SFTP user==
Now you can put the authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
and create the SFTP users like this (note that sshd requires the chroot directory and every component of its path to be owned by root and writable only by root, hence mode 0755):
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
c9aaa67601f8f870bc6605249c5a394e25dec706
1389
1381
2017-02-08T10:18:28Z
Lollypop
2
/* PuTTY Portable */
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the target=
==SSH across one or more hops==
To get an SSH connection from Host_A to Host_B you have to tunnel through two machines on the way (GW_1 and GW_2). If you log in hop by hop, it is often quite awkward to drag port forwardings or the SOCKS5 proxy along. It is easier to define ProxyCommands for the whole path from Host_A to Host_B.
Host_B can only be reached from GW_2, so we create an entry for it in the ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn can only be reached via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunnelled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are then done simply like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
==Breaking out of paradise==
Problem: the environment you are sitting in is so unhappily walled in with firewalls that you cannot work. But you need to get out via SSH to quickly look something up or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then put the following into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And presto, <i>ssh ssh-via-proxy</i> gets you onto the SSH target you want to reach. Of course you can add a ProxyCommand for the next hop on that server again, and so on.
==Oh yes... the internal wiki...==
Not a problem either if it is only reachable from the internal network; we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Opens a local SOCKS5 proxy on the given port
</pre>
Or, again, via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to work with shorter strings. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
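Newer OpenSSH versions print SHA256 fingerprints by default; the old colon-separated form shown above can still be requested with <i>-E md5</i>. A quick sketch with a throwaway key (the file name is arbitrary):
<source lang=bash>
# throwaway key, only to demonstrate the two fingerprint formats
rm -f /tmp/demokey /tmp/demokey.pub
ssh-keygen -t ed25519 -N '' -q -f /tmp/demokey
ssh-keygen -lf /tmp/demokey.pub        # default: SHA256 fingerprint
ssh-keygen -E md5 -lf /tmp/demokey.pub # legacy colon-separated MD5 form
rm -f /tmp/demokey /tmp/demokey.pub
</source>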
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==pageant zusammen mit putty starten==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini the following must appear under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
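The one-liner can be tried against a hypothetical RFC-4716-style public key file (the base64 payload below is a made-up placeholder; plain awk/gawk works just like nawk here):
<source lang=bash>
cat > /tmp/pubkey.ppk <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "lollypop@lollybook"
AAAAB3NzaC1yc2EAAAADAQABAAAAgQC1
---- END SSH2 PUBLIC KEY ----
EOF
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' /tmp/pubkey.ppk
</source>
which prints <i>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC1 lollypop@lollybook</i>.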
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp
# mkdir --mode=0700 /home/sftp/.authorized_keys
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
Match Group sftp
ChrootDirectory /home/sftp/%u
AuthorizedKeysFile /home/sftp/.authorized_keys/%u
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp
</source>
==Create SFTP user==
Now you can put the authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
and create the SFTP users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
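sshd only accepts a ChrootDirectory whose path components are all owned by root and not group- or world-writable; otherwise the login fails with "bad ownership or modes". A small helper to inspect a path (a sketch using GNU stat; the function name is made up):
<source lang=bash>
check_chroot_path() {
    # walk from the given directory up to /, printing owner, mode and
    # name of every component so violations are easy to spot
    d=$1
    while [ "$d" != "/" ]; do
        stat -c '%U %a %n' "$d"
        d=$(dirname "$d")
    done
}
# example: check_chroot_path /home/sftp/${USER}
</source>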
daea73c63ad7db625644bf2e4fe728b031eeab5b
Category:Grub
14
296
1387
2016-12-20T14:56:44Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Linux grub
0
297
1388
2016-12-20T15:10:19Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Linux|Grub]] [[Kategorie:Grub|Linux]] =grub rescue>= The problem: <source lang=bash> ... Entering rescue mode...…“
wikitext
text/x-wiki
[[Kategorie:Linux|Grub]]
[[Kategorie:Grub|Linux]]
=grub rescue>=
The problem:
<source lang=bash>
...
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
</source>
===Find the directory where the normal.mod file resides===
In this example we use LVM, and /boot/grub lives in the VG vg-root, LV lv-root.
<source lang=bash>
grub rescue> ls (lvm/vg--root-lv--root)/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
<source lang=bash>
grub rescue> set prefix=(lvm/vg--root-lv--root)/boot/grub
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
</source>
If the menu does not appear, you get something like this:
<source lang=bash>
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start the kernel==
Example for LVM:
<source lang=bash>
insmod gzio
insmod part_msdos
insmod lvm
insmod ext2
set root='lvmid/KAlPF4-Qb8I-Sx41-10cC-lACw-Msoh-3qEohv/pmE9Nt-rLG3-FlNM-CwOT-hy42-gSnm-fZSn3l'
linux /boot/vmlinuz-4.4.0-53-generic root=/dev/mapper/vg--root-lv--root ro
initrd /boot/initrd.img-4.4.0-53-generic
</source>
Example for ZFS-Root:
<source lang=bash>
insmod gzio
insmod part_msdos
insmod zfs
set root='hd0,msdos4'
linux /ROOT/ubuntu-15.04@/boot/vmlinuz-4.4.0-57-generic root=ZFS=rpool/ROOT/ubuntu-15.04 boot=zfs zfs_force=1 ro quiet splash nomdmonddf nomdmonisw $vt_handoff
initrd /ROOT/ubuntu-15.04@/boot/initrd.img-4.4.0-57-generic
</source>
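Once the system is up again, it is worth making the repair permanent, otherwise the next reboot drops you back at the rescue prompt. On Debian/Ubuntu-style systems this is roughly (the target device is an example):
<source lang=bash>
# update-grub
# grub-install /dev/sda
</source>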
b4c58c30a39ba72161f5b2e01129ce31cfc0269b
Fibrechannel Analyse
0
139
1393
965
2017-03-14T15:13:21Z
Lollypop
2
/* portloginshow */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached, together with 2 storage systems, to the switch with ID 1, and have a connection to a switch with ID 3 to which 2 further storage systems are attached.
* Node WWN
Here we see 4 disk devices with 2 entries each (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (under 8 we find ourselves).
Per storage we see 2 port WWNs here, i.e. 2 paths over our single host port.
Hence the 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage
Host Bus Adapter: FC card
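The Port_ID is just the 24-bit FC address in hex: domain (switch ID), area (on Brocade: the port index) and AL_PA. A small sketch to split it up (pure shell arithmetic; the function name is made up):
<source lang=bash>
decode_pid() {
    # split a 24-bit FC Port_ID (hex, e.g. 30200) into its three bytes
    pid=$(( 0x$1 ))
    printf 'domain=%d area=%d alpa=%d\n' \
        $(( pid >> 16 )) $(( (pid >> 8) & 255 )) $(( pid & 255 ))
}
decode_pid 30200   # -> domain=3 area=2 alpa=0 (switch 3, port 2)
decode_pid 10800   # -> domain=1 area=8 alpa=0 (ourselves: switch 1, port 8)
</source>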
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
Rough firmware indication (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
The raw device path
The hardware device path
For each path to this device there now follows a block of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about vendor, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
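With many remote ports it helps to filter the counters. A small awk sketch over such linkstat output (sample data inlined here; the threshold of 10 is arbitrary):
<source lang=bash>
awk '/Remote Port WWN:/{wwn=$NF}
     /Loss of Sync Count:/{if($NF+0>10) printf "%s: %s sync losses\n", wwn, $NF}' <<'EOF'
Remote Port WWN: 201600a0b86e103c
Loss of Sync Count: 3
Remote Port WWN: 201700a0b86e103c
Loss of Sync Count: 261
EOF
</source>
In real use, pipe <i>fcinfo remote-port --port <HBA Port WWN> --linkstat</i> into the awk instead of the here-document.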
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean them all up:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful! This issues a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (access LUNs of a storage)==
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris it lives in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 6 ports have SFPs fitted but are unused (0,4,11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp 8080 running cDOT, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
Even the VServer (NodeSymb)!
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config by script==
===Put the backup host ssh-pub-key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
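To run this nightly, a cron entry on the backup host rounds it off (time and log path are arbitrary examples):
<pre>
15 2 * * * /opt/bin/backup_brocade_config >>/var/log/brocade_backup.log 2>&1
</pre>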
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show : where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which shows that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : alias names for more clarity while debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and a ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, you can do it as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs, including multiple per line when a line contains more than one.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
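For example (the three-argument match() requires GNU awk):
<source lang=bash>
echo 'E-Port 10:00:00:05:33:df:43:5a "san-sw_21" F-Port 50:0a:09:81:8d:32:5d:c4' |
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}'
</source>
This prints the two WWNs, one per line.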
7ee438b428b5e73395b3bf6d555593e792cfaa3d
1394
1393
2017-03-14T15:40:56Z
Lollypop
2
/* portloginshow */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel analysis=
=Commands : Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are evidently 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we sit together with 2 storage systems on the switch with ID 1 and have a connection to a switch with ID 3, to which 2 more storage systems are attached.
* Node WWN
Here we see 4 disk devices with 2 entries each (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 Port WWNs here, i.e. 2 paths over our single host port.
Hence the 4 paths later (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
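The Port_ID decoding described above can be sketched as a small bash helper (<code>decode_portid</code> is a made-up name, not a system command; it assumes the standard 24-bit FC address layout of 8 bits each for switch domain, area/port and AL_PA):
<source lang=bash>
# Decode a hex Port_ID as printed by "luxadm -e dump_map" into
# switch domain, switch port (area) and AL_PA (8 bits each).
decode_portid() {
    local pid
    pid=$(printf '%06x' "0x$1")   # left-pad to 6 hex digits
    printf 'domain=%d port=%d alpa=%d\n' \
        "0x${pid:0:2}" "0x${pid:2:2}" "0x${pid:4:2}"
}

decode_portid 10800   # our own HBA from the table above
decode_portid 30200   # a storage port on the other switch
</source>
For the host entry 10800 this yields domain 1, port 8, matching the interpretation above.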
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Manufacturer
* Product ID: STK6580_6780
So this is a StorageTek 6580/6780
* Revision: 0784
Rough indication of the firmware (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access), the device tells the host which paths it should primarily use to access the LUN.
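To get an overview of many paths at once, the per-path blocks of <i>luxadm display</i> can be condensed to one line per path. A minimal sketch (<code>luxadm_paths</code> is a made-up helper name; the heredoc sample is an excerpt of the output above):
<source lang=bash>
# One line per path: controller, ALUA class, path state.
luxadm_paths() {
    awk '/Controller \//{ c  = $2 }
         /Class /       { cl = $2 }
         /State /       { print c, cl, $2 }' "$@"
}

luxadm_paths <<'EOF'
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
EOF
</source>
In practice you would feed it a saved capture instead of the heredoc, e.g. <code>luxadm display ... | luxadm_paths</code>.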
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
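To spot a flapping link in that wall of counters, the output can be condensed to one line per remote port. A sketch (<code>linkstat_summary</code> is a made-up helper name, assuming the "Key: value" layout shown above):
<source lang=bash>
# One line per remote port: WWN and link failure count.
linkstat_summary() {
    awk -F': *' '/Remote Port WWN/    { wwn = $2 }
                 /Link Failure Count/ { print wwn, $2 }' "$@"
}

linkstat_summary <<'EOF'
Remote Port WWN: 201700a0b86e103c
Link Failure Count: 4
Loss of Sync Count: 261
Remote Port WWN: 202200a0b85aeb2d
Link Failure Count: 2
EOF
</source>
Printing "Loss of Sync Count" instead would make the conspicuous port above (261 losses) stand out the same way.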
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (blocking access to specific LUNs of a storage)==
Symptom: the host keeps complaining about LUNs it should not touch:
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
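The <port WWN>,<LUN> pairs needed for the blacklist can be scraped out of those warnings. A sketch (<code>blacklist_entries</code> is a made-up name; it assumes the <code>ssd@w<wwn>,<lun></code> path format shown above):
<source lang=bash>
# Extract unique "portwwn,lun" pairs from ssd warning lines.
blacklist_entries() {
    awk 'match($0, /ssd@w[0-9a-f]+,[0-9a-f]+/) {
             print substr($0, RSTART + 5, RLENGTH - 5)
         }' "$@" | sort -u
}

blacklist_entries <<'EOF'
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
EOF
</source>
Running it over /var/adm/messages gives exactly the strings that go into fp.conf below.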
These <port WWN>,<LUN> pairs go into fp.conf as pwwn-lun-blacklist entries:
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
After a reconfiguration reboot the LUNs are masked:
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" switch of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) via E-Port to another switch (san-sw_21)
# 6 ports are equipped with SFPs but unused (0,4,11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
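That port bookkeeping can also be cross-checked mechanically by tallying the State column of the switchshow output. A sketch (<code>switch_states</code> is a made-up helper name, assuming the column layout shown above):
<source lang=bash>
# Count switchshow port lines by their State column (column 6).
switch_states() {
    awk '/^ *[0-9]+ +[0-9]+ +[0-9a-f]+ /{ count[$6]++ }
         END { for (s in count) print s, count[s] }' "$@" | LC_ALL=C sort
}

switch_states <<'EOF'
 0 0 010000 id N8 No_Light FC
 2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
EOF
</source>
Piping the real <i>switchshow</i> output through it gives the Online / No_Light / No_Module totals directly.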
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp 8080 running cDOT, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
Even the VServer (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<source lang=bash>
nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
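To feed just the NPIV logins into <i>nodefind</i>, the type-fd rows (FDISC, i.e. NPIV logins) of the portloginshow output can be filtered out. A sketch (<code>npiv_wwns</code> is a made-up helper name, assuming the column layout shown above):
<source lang=bash>
# Print the port WWN of every "fd" (FDISC/NPIV) login line.
npiv_wwns() {
    awk '$1 == "fd" { print $3 }' "$@"
}

npiv_wwns <<'EOF'
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
EOF
</source>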
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate an SSH key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show : where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which shows that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : alias names for more clarity when debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Prints only the WWNs, including several per line if a line contains more than one.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
0bc3402d9b72f336f865270d125aa5069039bba2
1395
1394
2017-03-14T15:41:30Z
Lollypop
2
/* portloginshow */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibrechannel Analyse=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Gibt die Hardwarepfade der vorhandened Fibrechannelports und deren Status aus:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 Dualport Karten:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
Aus der Seite [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support Login benötigt):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
Also stecken unsere Karten in Slot 4 und 5.
===luxadm -e dump_map <HW_path>===
Gibt die Tabelle der bekannten Geräte an einem Port aus
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Erklärung der interessanten Spalten:
* Port_ID <Switch_ID><Switchport><??>
Es sind also offensichtlich 2 Switches in der Fabric an Port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
und zwar mit der ID 1 und mit der ID 3.
Switch ID 1
Port 1 und 5 : Node WWN 200400a0b85bb030
Port 2 und 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (Wir selbst)
Switch ID 3
Port 1 und 5 : Node WWN 200200a0b85aeb2d
Port 2 und 6 : Node WWN 200600a0b86e10e4
Wir hängen also mit 2 Storages auf dem Switch mit der ID 1 und haben eine Verbindung zu einem Switch mit der ID 3 an dem 2 weitere Storages hängen.
* Node WWN
Wir sehen hier 4 Disk Devices mit jeweils 2 Einträgen (Gleiche Node WWN)
* Port WWN
Dies ist die Port WWN der an den Switch angeschlossenen Geräte (unter 8 finden wir uns selbst).
Pro Storage sehen wir hier 2 Port WWNs, also 2 Pfade über unseren einen Hostport.
Daher nachher 4 Pfade (2 Pro Hostport) beim [[#mpathadm list lu]].
* Type
Disk Device: Storage
Host Bus Adapter: FC-Karte
===luxadm probe===
Auflistung aller erkannten Fibrechanneldevices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Hersteller
* Product ID: STK6580_6780
Also ein StorageTek 6580/6780
* Revision: 0784
Grobe Firmwarepeilung (Firmware Version: 07.84.47.10)
Siehe hier [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Praktisch, wenn man mit NetApps arbeitet, um die LUNs zuzuordnen.
* Unformatted capacity: 204800.000 MBytes
Immer gut zu wissen
* Write Cache: Enabled
Die Batterie im Storage sollte also OK sein ;-)
* Path(s):
Rawdevicepath
Hardwaredevicepath
Jetzt folgen immer pro Pfad zu diesem Device ein Block aus
Controller (siehe unten)
Device Address <Port WWN vom Device>,<LUN ID>
Class <primary|secondary> (siehe unten)
State <Online|Standby|Oflline>
Zuweisung Controller zum FC-Port über:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Man sieht den Hardwarepfad von [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) teilt das Device dem Host mit, über welche Pfade der Host primär auf die LUN zugreifen soll.
==fcinfo==
===fcinfo hba-port===
Prints some information about the manufacturer, model, firmware, port and node WWN, current speed, and so on:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
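Note the Loss of Sync Count of 261 on 201700a0b86e103c against single digits everywhere else; such an outlier usually points at a flaky cable, SFP, or patch panel. A sketch that filters saved `--linkstat` output for suspicious ports (the threshold of 100 is an arbitrary assumption):

```shell
# Canned excerpt of `fcinfo remote-port ... --linkstat`; use live output in practice.
linkstat='Remote Port WWN: 201600a0b86e103c
Loss of Sync Count: 3
Remote Port WWN: 201700a0b86e103c
Loss of Sync Count: 261'
# Report remote ports whose loss-of-sync counter exceeds the limit
bad=$(printf '%s\n' "$linkstat" | awk -v limit=100 '
/Remote Port WWN/    {wwn=$NF}
/Loss of Sync Count/ {if ($NF+0 > limit) print wwn, $NF}')
printf '%s\n' "$bad"
```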
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean them all up:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
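A dry run of the one-liner above, showing which Ap_Ids would be unconfigured (canned `cfgadm` output, and nawk swapped for plain awk so the sketch runs outside Solaris too):

```shell
# Canned `cfgadm -alo show_SCSI_LUN` lines; pipe the real command in production.
cfgadm_out='c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable'
# Strip the ,<LUN> suffix and deduplicate -> one Ap_Id per unusable target
aps=$(printf '%s\n' "$cfgadm_out" |
awk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1); print $1}' | sort -u)
printf '%s\n' "$aps"
```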
===cfgadm -o force_update -c configure <controller>===
Rescans the LUNs. Be careful: this performs a forced LIP (loop initialization)!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (blocking access to LUNs of a storage array)==
Syslog keeps warning about a LUN that the host should not see (here LUN 7 on several target ports):
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
The affected LUNs can be masked per target-port WWN with the pwwn-lun-blacklist property in /etc/driver/drv/fp.conf:
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
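The blacklist entries can be generated from the cmlb warnings, since the ssd@w<portWWN>,<LUN> component of the device path carries exactly the two values needed. A sketch over canned messages (paths shortened; feed /var/adm/messages in practice):

```shell
# Canned cmlb warnings; the real ones come from /var/adm/messages.
messages='cmlb: WARNING: /pci@380/pci@1/ssd@w204300a096691217,7 (ssd7):
cmlb: WARNING: /pci@300/pci@1/ssd@w203300a096691217,7 (ssd2):'
# Extract <portWWN>,<LUN> and format as pwwn-lun-blacklist entries
entries=$(printf '%s\n' "$messages" |
sed -n 's/.*ssd@w\([0-9a-f]*,[0-9]*\).*/"\1",/p' | sort -u)
printf '%s\n' "$entries"
```

Remember that the last entry in fp.conf must end with `;` instead of `,`.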
After a reconfiguration reboot the LUNs are masked:
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it lives in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Ports ("san-sw_21")
# 7 ports have SFPs installed but are unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
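The tally can be checked by counting the State column; a sketch over a few canned port lines (field 6 is State):

```shell
# Three canned switchshow port lines; pipe real `switchshow` output in practice.
ports='0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
16 16 011000 -- N8 No_Module FC (No POD License) Disabled'
# Port lines start with the numeric index; count ports per State (column 6)
counts=$(printf '%s\n' "$ports" |
awk '/^ *[0-9]+ /{n[$6]++} END{for (s in n) print s, n[s]}' | sort)
printf '%s\n' "$counts"
```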
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports. In <i>switchshow</i> such a port looks like this:
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp FAS8080 running cDOT, as <i>nodefind <address></i> shows:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
It even shows the Vserver (NodeSymb)!
And with the node WWN (NodeName) you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
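For reference, the usual create-and-activate cycle ties these commands together. A sketch only: the alias name and WWN are borrowed from the fcp wwpn-alias example further down, the cfg name "Fabric1" from the switchshow output above, and the zone name and the second alias are hypothetical:

```
alicreate "sun07_Slot2_Port0","21:00:00:24:ff:36:3a:5a"
alishow "sun07_Slot2_Port0"
zonecreate "z_sun07_fas01","sun07_Slot2_Port0;fas01_1a"
cfgadd "Fabric1","z_sun07_fas01"
cfgsave
cfgenable "Fabric1"
zoneshow "z_sun07_fas01"
```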
==Backing up the switch config via a script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
# Parses a Brocade configupload file: lists the zones of the enabled config
# with aliases resolved and a vendor guess per WWN OUI, then prints the
# commands needed to recreate the whole zoning configuration.
BEGIN{
# WWN OUI -> vendor name
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show: where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port>: which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we sit on switch ID 01, port 06.
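The 24-bit FC address splits into three bytes, domain, area, and port. A minimal sketch decoding the host address above (on this fixed-port switch the area byte equals the port number; the low byte is 00 for the physical N_Port):

```shell
# Split "host address" 010600 into its three bytes
addr=010600
domain=${addr%????}          # first byte  -> switch domain ID (01)
rest=${addr#??}
area=${rest%??}              # middle byte -> area = switch port (06)
low=${addr#????}             # low byte    -> AL_PA / NPIV index (00)
decoded="domain=$((0x$domain)) port=$((0x$area)) low=$((0x$low))"
printf '%s\n' "$decoded"
```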
==fcp wwpn-alias (set|show): aliases for more clarity when debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and zpools)==
If you want to know which NetApp LUNs belong to a zpool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs; if a line contains several, all of them are printed.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
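Where GNU or BSD grep is available, `grep -E -o` does the same extraction; a quick sketch (the WWNs are taken from the switchshow output above):

```shell
# Pull every colon-separated WWN out of a line; grep -o prints one match per line
wwns=$(printf 'switchWwn: 10:00:00:05:33:df:43:5a peer 21:00:00:24:ff:05:74:e4\n' |
grep -Eo '[0-9a-f]{2}(:[0-9a-f]{2}){7}')
printf '%s\n' "$wwns"
```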
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibrechannel Analyse=
=Kommandos : Solaris=
==luxadm==
===luxadm -e port===
Gibt die Hardwarepfade der vorhandened Fibrechannelports und deren Status aus:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
2 Dualport Karten:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
Aus der Seite [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support Login benötigt):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
Also stecken unsere Karten in Slot 4 und 5.
===luxadm -e dump_map <HW_path>===
Gibt die Tabelle der bekannten Geräte an einem Port aus
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Erklärung der interessanten Spalten:
* Port_ID <Switch_ID><Switchport><??>
Es sind also offensichtlich 2 Switches in der Fabric an Port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
und zwar mit der ID 1 und mit der ID 3.
Switch ID 1
Port 1 und 5 : Node WWN 200400a0b85bb030
Port 2 und 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (Wir selbst)
Switch ID 3
Port 1 und 5 : Node WWN 200200a0b85aeb2d
Port 2 und 6 : Node WWN 200600a0b86e10e4
Wir hängen also mit 2 Storages auf dem Switch mit der ID 1 und haben eine Verbindung zu einem Switch mit der ID 3 an dem 2 weitere Storages hängen.
* Node WWN
Wir sehen hier 4 Disk Devices mit jeweils 2 Einträgen (Gleiche Node WWN)
* Port WWN
Dies ist die Port WWN der an den Switch angeschlossenen Geräte (unter 8 finden wir uns selbst).
Pro Storage sehen wir hier 2 Port WWNs, also 2 Pfade über unseren einen Hostport.
Daher nachher 4 Pfade (2 Pro Hostport) beim [[#mpathadm list lu]].
* Type
Disk Device: Storage
Host Bus Adapter: FC-Karte
===luxadm probe===
Auflistung aller erkannten Fibrechanneldevices
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
Hersteller
* Product ID: STK6580_6780
Also ein StorageTek 6580/6780
* Revision: 0784
Grobe Firmwarepeilung (Firmware Version: 07.84.47.10)
Siehe hier [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Praktisch, wenn man mit NetApps arbeitet, um die LUNs zuzuordnen.
* Unformatted capacity: 204800.000 MBytes
Immer gut zu wissen
* Write Cache: Enabled
Die Batterie im Storage sollte also OK sein ;-)
* Path(s):
Rawdevicepath
Hardwaredevicepath
Jetzt folgen immer pro Pfad zu diesem Device ein Block aus
Controller (siehe unten)
Device Address <Port WWN vom Device>,<LUN ID>
Class <primary|secondary> (siehe unten)
State <Online|Standby|Oflline>
Zuweisung Controller zum FC-Port über:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
Man sieht den Hardwarepfad von [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) teilt das Device dem Host mit, über welche Pfade der Host primär auf die LUN zugreifen soll.
==fcinfo==
===fcinfo hba-port===
Gibt ein paar Infos über Hersteller, Modell, Firmware, Port und Node WWN, Current Speed, ... aus
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them at once:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
===cfgadm -o force_update -c configure <controller>===
Rescan LUNs. Be careful: this triggers a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (controlling access to a storage array's LUNs)==
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
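The blacklist stanza above follows a mechanical pattern, so it can be generated from a WWN list; a minimal sketch (the gen_blacklist helper name and its WWN arguments are examples, not part of Solaris):

```shell
# Sketch: emit a pwwn-lun-blacklist stanza for fp.conf from a LUN number
# and a list of target port WWNs (example values below).
gen_blacklist() {   # usage: gen_blacklist <lun> <wwn>...
    lun=$1; shift
    printf 'pwwn-lun-blacklist=\n'
    while [ $# -gt 1 ]; do
        printf '"%s,%s",\n' "$1" "$lun"
        shift
    done
    printf '"%s,%s";\n' "$1" "$lun"   # the last entry ends with a semicolon
}
gen_blacklist 7 203200a096691217 203300a096691217 204200a096691217
```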
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to the E-Ports of another switch (san-sw_21)
# 7 ports are equipped with SFPs but unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
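This tally can also be produced mechanically; a small awk sketch over the port table (sample lines copied from the switchshow listing above; this is not a Brocade command):

```shell
# Sketch: count switchshow port states with awk. In the port table the
# State column is always field 6, whether the Media column is "id" or "--".
switchshow_sample() {
cat <<'EOF'
  0   0   010000   id    N8       No_Light    FC
  1   1   010100   id    N8       Online      FC  E-Port  10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
 16  16   011000   --    N8       No_Module   FC  (No POD License) Disabled
EOF
}
switchshow_sample | awk '{ states[$6]++ } END { for (s in states) print s, states[s] }'
```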
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
===islshow===
<source lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Show information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port is a NetApp 8080 running CDOT, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
It even shows the Vserver (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
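The semicolon-separated N-records in nodefind output lend themselves to scripting; a small awk sketch (sample records copied from the output above) extracts the WWPN/WWNN pairs:

```shell
# Sketch: reduce nodefind N-port records to PortName/NodeName pairs.
# With ';' as separator, field 3 is the PortName and field 4 the NodeName.
nodefind_sample() {
cat <<'EOF'
 N    0f2201;      3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
 N    0f2301;      3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
EOF
}
nodefind_sample | awk -F';' '/^ *N /{ print "PortName:", $3, "NodeName:", $4 }'
```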
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show : Where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
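The three byte fields of such an address can be split with bash substring expansion (a sketch; assumes bash):

```shell
# Sketch: decode the 24-bit FC "host address" into its three byte fields.
# 010600 = switch domain 01, area (switch port) 06, ALPA 00.
fcid=010600
printf 'domain=%s port=%s alpa=%s\n' "${fcid:0:2}" "${fcid:2:2}" "${fcid:4:2}"
```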
==fcp wwpn-alias (set|show) : alias names for more clarity when debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Prints only the WWNs; if a line contains several, all of them are printed.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
a9ee42bf45ef292d03d615dfbbc01ad12857d64f
Ubuntu apt
0
120
1400
1260
2017-03-23T15:44:24Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Get all non LTS packages ==
<source lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</source>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is prefered if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
548892e4adab508a6d10e18502c919e8e7b1a64d
Bash cheatsheet
0
37
1401
1360
2017-04-11T12:04:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want to use a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, the sudo user is used instead. If $SUDO_USER is empty too, "default" is used as the extension.
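The fallback chain can be checked in isolation (a sketch; alice is an example sudo user):

```shell
# Sketch of the fallback chain used for HISTFILE above:
# FINGERPRINT first, then SUDO_USER, then the literal "default".
FINGERPRINT="" SUDO_USER="alice"
echo ".bash_history_${FINGERPRINT:-${SUDO_USER:-default}}"   # -> .bash_history_alice
SUDO_USER=""
echo ".bash_history_${FINGERPRINT:-${SUDO_USER:-default}}"   # -> .bash_history_default
```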
I forced rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, with the exception of the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i> and use the no-op <i>:</i> (which behaves like <i>continue</i>) as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
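The `$[ ]` form is deprecated bash syntax; the standard arithmetic expansion `$(( ))` yields the same results (`**` remains a bash extension):

```shell
echo $(( 3 + 4 ))    # 7
echo $(( 2 ** 8 ))   # 256 (** is bash's exponentiation operator)
```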
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
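The symlink-name handling can be checked in isolation; with NAME=mydaemon (an example name), the two parameter expansions derive the command as follows:

```shell
# Sketch: how ${SELF%.sh} and ${command##${NAME}-} derive the command
# from the script name (mydaemon is an example daemon name).
NAME=mydaemon
for SELF in mydaemon-start.sh mydaemon-stop.sh mydaemon.sh; do
    command=${SELF%.sh}           # strip the .sh suffix
    command=${command##${NAME}-}  # strip the "mydaemon-" prefix
    [ "_${command}_" == "_${NAME}_" ] && command=""   # bare name -> no command
    echo "${SELF} -> '${command}'"
done
```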
a8bbb2d1c0416627b7e92119844011a6cef1900d
1405
1401
2017-04-13T08:53:30Z
Lollypop
2
/* Useful variable substitutions */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want to use a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, the sudo user is used instead. If $SUDO_USER is empty too, "default" is used as the extension.
I forced rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, with the exception of the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. advancing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i> and use the no-op <i>:</i> (which behaves like <i>continue</i>) as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
eca6fc228b926182bb1ae4fc0149511ea808a6b5
NetApp Commands
0
201
1402
939
2017-04-12T13:22:33Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
To see in which bucket the reads and writes occur:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
==Performance ==
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
=== Flashpool ===
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
== User ==
=== Create snapshot user ===
<source lang=bash>
security login role create -vserver svm1 vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</source>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
173a1afc8665c11ef0b35e8c34ac3084c23eac6a
1403
1402
2017-04-12T13:23:19Z
Lollypop
2
/* Create snapshot user */
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
To see in which bucket the reads and writes occur:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
=== Flashpool ===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
== User ==
=== Create snapshot user ===
<source lang=bash>
security login role create -vserver svm1 vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</source>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
2f95e4de921cee7bf32dda424f122c75980a99ce
1404
1403
2017-04-12T13:23:44Z
Lollypop
2
/* Create snapshot user */
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_backup -fields vserver,lun,alignment -path /vol/vol_luns_backup/lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
To see in which buckets the reads and writes occur (the eight buckets are 512-byte offsets within a 4 KB block; aligned I/O lands in bucket 0):
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/vol_luns_backup/lun_*
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
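Any counts outside bucket 0 indicate I/O that does not start on a 4K boundary. Against captured output this can be checked with a one-liner; a minimal sketch (the sample line below is hypothetical, in the column layout shown above):

```shell
# Field 5 is the write-histogram; sum buckets 2..8 and flag non-zero totals,
# which mean writes landing off the 4K boundary.
line='svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0'
echo "$line" | awk '{n=split($5,w,","); s=0; for(i=2;i<=n;i++) s+=w[i]; print (s > 0 ? "check-alignment" : "ok")}'
```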
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
=== Flashpool ===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
== User ==
=== Create snapshot user ===
<source lang=bash>
security login role create -vserver svm1 vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</source>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
68838fa2dbce278943dbc358a44e4bd0f1a1c316
Ansible tips and tricks
0
299
1406
2017-04-19T09:56:54Z
Lollypop
2
Die Seite wurde neu angelegt: „[[ Kategorie: KnowHow | Tips and tricks ]] [[ Kategorie: KnowHow ]]“
wikitext
text/x-wiki
[[ Kategorie: KnowHow | Tips and tricks ]]
[[ Kategorie: KnowHow ]]
856bdee80d837414db9b887c68e8f3d6df715147
1407
1406
2017-04-19T09:57:57Z
Lollypop
2
wikitext
text/x-wiki
[[ Kategorie: KnowHow ]]
1036e829c0d28d45fe9bed435ae7626a79d96472
1408
1407
2017-04-19T09:58:36Z
Lollypop
2
wikitext
text/x-wiki
[[ Kategorie: Ansible | Tips and tricks ]]
8f54587e9d52d134b7e19dd79105199805a6b3ac
1410
1408
2017-04-19T10:06:32Z
Lollypop
2
wikitext
text/x-wiki
== Gathering facts from file ==
=== Variables from an Oracle response file ===
<source lang=yaml>
- name: "Get variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Set facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</source>
This snippet reads selected variables from the response file and registers each one twice: as an individual fact named after the variable (lowercased and prefixed with oracle_ unless it already starts with it), and as an entry in the list <i>oracle_environment</i>. That list can be passed to <i>environment:</i> when you use <i>shell:</i>.
[[ Kategorie: Ansible | Tips and tricks ]]
47c2fba9abe2b0bbe881ed9470dd3e27a030d0a4
1411
1410
2017-04-19T10:19:26Z
Lollypop
2
/* Variables from an Oracle response file */
wikitext
text/x-wiki
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the response file and registers each one twice: as an individual fact named after the variable (lowercased and prefixed with oracle_ unless it already starts with it), and as an entry in the list <i>oracle_environment</i>. That list can be passed to <i>environment:</i> when you use <i>shell:</i>.
<source lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</source>
<source lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</source>
[[ Kategorie: Ansible | Tips and tricks ]]
68db5cc9934c4ff0203e3b40911dc779f6b6e09e
1416
1411
2017-04-26T08:47:02Z
Lollypop
2
wikitext
text/x-wiki
== Ansible commandline ==
=== Get settings for host ===
Show all variables (facts, host vars, group vars) Ansible has for the host named in ${hostname}:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</source>
For example:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</source>
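The ad-hoc output is prefixed with "localhost | SUCCESS => " and is therefore not plain JSON. A sketch (with a hypothetical captured line) of stripping that prefix before piping the payload into a JSON tool:

```shell
# Hypothetical captured output of the debug command above.
out='localhost | SUCCESS => {
    "ansible_connection": "local"
}'
# Drop the "host | SUCCESS => " prefix on the first line; the rest is JSON.
printf '%s\n' "$out" | sed '1s/^.* => //'
```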
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the response file and registers each one twice: as an individual fact named after the variable (lowercased and prefixed with oracle_ unless it already starts with it), and as an entry in the list <i>oracle_environment</i>. That list can be passed to <i>environment:</i> when you use <i>shell:</i>.
<source lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</source>
<source lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</source>
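The extraction can be tried outside Ansible with the same awk filter; a minimal sketch against a hypothetical response file (the real file is the Oracle installer template referenced by <i>oracle_response_file</i>):

```shell
# Hypothetical stand-in for the Oracle response file; real keys/values differ.
rsp=$(mktemp)
cat > "$rsp" <<'EOF'
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
INVENTORY_LOCATION=/u01/app/oraInventory
EOF
# The filter each with_items iteration runs; note /ORACLE_HOME/ would also
# match keys that merely contain the string, e.g. ORACLE_HOME_NAME.
awk -F '=' '/ORACLE_HOME/{print $2;}' "$rsp"
rm -f "$rsp"
```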
[[ Kategorie: Ansible | Tips and tricks ]]
13875e87d2e41e4b38efce68185f18d072a739bf
Category:Ansible
14
300
1409
2017-04-19T09:58:51Z
Lollypop
2
Die Seite wurde neu angelegt: „[[ Kategorie: KnowHow ]]“
wikitext
text/x-wiki
[[ Kategorie: KnowHow ]]
1036e829c0d28d45fe9bed435ae7626a79d96472
RedHat networking
0
301
1412
2017-04-21T13:09:34Z
Lollypop
2
Die Seite wurde neu angelegt: „= Bonding = In this example we configure two bonds. bond0 : Failover bond1 : LACP == /etc/modprobe.d/bonding.conf == <source lang=conf> alias netdev-bond0 bo…“
wikitext
text/x-wiki
= Bonding =
In this example we configure two bonds.
* bond0 : Failover
* bond1 : LACP
== /etc/modprobe.d/bonding.conf ==
<source lang=conf>
alias netdev-bond0 bonding
options bond0 miimon=100 mode=active-backup updelay=0 downdelay=0 primary=bond0_slave1
alias netdev-bond1 bonding
options bond1 miimon=100 mode=4 lacp_rate=1
</source>
91a20a2f15175ca21cd67619c373091e29abd432
1413
1412
2017-04-21T13:20:47Z
Lollypop
2
wikitext
text/x-wiki
= Bonding =
In this example we configure two bonds.
* bond0 : Failover (eno1/bond0_slave1 and eno3/bond0_slave2)
* bond1 : LACP
== /etc/modprobe.d/bonding.conf ==
<source lang=conf>
alias netdev-bond0 bonding
options bond0 miimon=100 mode=active-backup updelay=0 downdelay=0 primary=bond0_slave1
alias netdev-bond1 bonding
options bond1 miimon=100 mode=4 lacp_rate=1
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond0 ==
<source lang=conf>
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond0
UUID=9e2088b8-4cfe-435a-b0a2-9387f0fc8024
ONBOOT=yes
DNS1=172.16.0.69
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=active-backup primary=bond0_slave1"
IPADDR=172.16.0.105
PREFIX=16
GATEWAY=172.16.0.1
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave1 ==
<source lang=conf>
HWADDR=94:18:82:80:C2:18
TYPE=Ethernet
NAME=bond0_slave1
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave2 ==
<source lang=conf>
HWADDR=94:18:82:80:C2:1A
TYPE=Ethernet
NAME=bond0_slave2
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</source>
== Check state of bond0 ==
<source lang=bash>
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:18
Slave queue ID: 0
Slave Interface: eno3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:1a
Slave queue ID: 0
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond1 ==
<source lang=conf>
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond1
UUID=c9a4bce2-5dbe-4cf9-beb6-34a24512ae23
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
IPADDR=172.20.0.30
PREFIX=24
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave1 ==
<source lang=conf>
TYPE=Ethernet
NAME=bond1_slave1
UUID=9ad3a93f-362e-4a18-bb2e-c4588e666e12
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno49
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave2 ==
<source lang=conf>
TYPE=Ethernet
NAME=bond1_slave2
UUID=6d8015ef-fe60-472a-b18f-17caf952e45b
ONBOOT=yes
#MASTER=c9a4bce2-5dbe-4cf9-beb6-34a24512ae23
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno50
</source>
== Check state of bond1 ==
<source lang=bash>
# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 13
Partner Key: 70
Partner Mac Address: 01:e0:52:00:00:02
Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 534
port state: 63
Slave Interface: eno50
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 1046
port state: 63
</source>
246b6f64f347a89f029acca432da06506583827d
1414
1413
2017-04-21T13:27:25Z
Lollypop
2
/* /etc/sysconfig/network-scripts/ifcfg-bond1_slave2 */
wikitext
text/x-wiki
= Bonding =
In this example we configure two bonds.
* bond0 : Failover (eno1/bond0_slave1 and eno3/bond0_slave2)
* bond1 : LACP
== /etc/modprobe.d/bonding.conf ==
<source lang=conf>
alias netdev-bond0 bonding
options bond0 miimon=100 mode=active-backup updelay=0 downdelay=0 primary=bond0_slave1
alias netdev-bond1 bonding
options bond1 miimon=100 mode=4 lacp_rate=1
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond0 ==
<source lang=conf>
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond0
UUID=9e2088b8-4cfe-435a-b0a2-9387f0fc8024
ONBOOT=yes
DNS1=172.16.0.69
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=active-backup primary=bond0_slave1"
IPADDR=172.16.0.105
PREFIX=16
GATEWAY=172.16.0.1
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave1 ==
<source lang=conf>
HWADDR=94:18:82:80:C2:18
TYPE=Ethernet
NAME=bond0_slave1
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave2 ==
<source lang=conf>
HWADDR=94:18:82:80:C2:1A
TYPE=Ethernet
NAME=bond0_slave2
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</source>
== Check state of bond0 ==
<source lang=bash>
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:18
Slave queue ID: 0
Slave Interface: eno3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:1a
Slave queue ID: 0
</source>
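When scripting health checks, the active slave can be read directly from /sys/class/net/bond0/bonding/active_slave, or parsed out of a captured copy of the status above; a minimal sketch:

```shell
# Inline sample standing in for /proc/net/bonding/bond0.
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eno1
MII Status: up'
printf '%s\n' "$status" | awk -F ': ' '/^Currently Active Slave/{print $2}'
```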
== /etc/sysconfig/network-scripts/ifcfg-bond1 ==
<source lang=conf>
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond1
UUID=c9a4bce2-5dbe-4cf9-beb6-34a24512ae23
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
IPADDR=172.20.0.30
PREFIX=24
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave1 ==
<source lang=conf>
TYPE=Ethernet
NAME=bond1_slave1
UUID=9ad3a93f-362e-4a18-bb2e-c4588e666e12
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno49
</source>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave2 ==
<source lang=conf>
TYPE=Ethernet
NAME=bond1_slave2
UUID=6d8015ef-fe60-472a-b18f-17caf952e45b
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno50
</source>
== Check state of bond1 ==
<source lang=bash>
# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 13
Partner Key: 70
Partner Mac Address: 01:e0:52:00:00:02
Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 534
port state: 63
Slave Interface: eno50
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 1046
port state: 63
</source>
33752f395529b99b77597f177725e2abc96fd34c
Category:Termiten
14
302
1417
2017-04-28T12:53:58Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: Insekten]]“
wikitext
text/x-wiki
[[Kategorie: Insekten]]
0394c15dc7bc827648333de49b954e344201aaa6
Reticulitermes grassei
0
303
1418
2017-04-28T12:59:24Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Reticulitermes grassei | Autor = | Unterfamilie = Heterotermitinae | Familie…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor =
| Unterfamilie = Heterotermitinae
| Familie = Rhinotermitidae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
419cd8985fc2272170a7f17b0ace7b22ea98d277
1419
1418
2017-04-28T13:07:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor =
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
fc8a22acd60a2a9dbc00ec09d8dd0b23b9b4ad56
1424
1419
2017-04-28T13:43:44Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
3aa0dcb0b1f267224385ee9558b09365743c4d47
Category:Isoptera
14
304
1420
2017-04-28T13:08:41Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Blattodea]]“
wikitext
text/x-wiki
[[Kategorie:Blattodea]]
90cca11127aa448c8819e5a187f0666a919f6637
Category:Rhinotermitidae
14
305
1421
2017-04-28T13:09:21Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: Isoptera]]“
wikitext
text/x-wiki
[[Kategorie: Isoptera]]
aa1791702399a4e3a0f628178b3fd401b69bfc4d
Category:Reticulitermes
14
307
1423
2017-04-28T13:10:18Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Heterotermitinae]]“
wikitext
text/x-wiki
[[Kategorie:Heterotermitinae]]
3eb92cb4d9dcd1be873d8b613b451bcdead427b5
Reticulitermes flavipes
0
308
1425
2017-04-28T13:44:46Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Reticulitermes flavipes | Autor = (Kollar, 1837) | Ordnung = Blattodea | Superf…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes flavipes
| Autor = (Kollar, 1837)
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = flavipes
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
177038b229fc6c7d1d88b1b21c00fee4ad68b479
Reticulitermes banyulensis
0
309
1426
2017-04-28T13:46:45Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Reticulitermes banyulensis | Autor = Clément, 1978 | Ordnung = Blattodea | Sup…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = banyulensis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
56499b5aa7d0dbd0d0c1a6064a4b3cd4b27aa681
Cryptotermes brevis
0
310
1427
2017-04-28T13:51:03Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Cryptotermes brevis | Autor = (Walker, 1853) | Ordnung = Blattodea | Superfamil…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = banyulensis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
d10cb85122f8ad4e37a2399913fdb4a2c3041d3f
Cryptotermes brevis
0
310
1428
1427
2017-04-28T13:51:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Cryptotermes
| Untergattung =
| Art = brevis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
9aee52d1ab0d1f60a3c2c91bca1b3162f50b1a15
1447
1428
2017-04-28T14:30:32Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Cryptotermes
| Untergattung =
| Art = brevis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
7af33bc06454986bccee104d7948d9fba7dbb208
1467
1447
2017-04-28T14:46:49Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Cryptotermes
| Untergattung =
| Art = brevis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
64bdfe1a4db51e63009088b39b98672da65003c0
Category:Kalotermitidae
14
311
1429
2017-04-28T13:52:18Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Kalotermitidae]]“
wikitext
text/x-wiki
[[Kategorie:Kalotermitidae]]
cdf345b1d33377f655f20c458ecbca034af21d57
1430
1429
2017-04-28T13:52:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Isoptera]]
c9cfd2e4c53eae8bed80ea3eb2845b4abc115ab1
Category:Cryptotermes
14
312
1431
2017-04-28T13:53:22Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Kalotermitidae]]“
wikitext
text/x-wiki
[[Kategorie:Kalotermitidae]]
cdf345b1d33377f655f20c458ecbca034af21d57
Bifiditermes rogierae
0
313
1432
2017-04-28T14:02:55Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = Bifiditermes rogierae | WissName = | Autor = Hollande 1982 | Ordnung = Blattodea | Superfami…“
wikitext
text/x-wiki
{{Systematik
| DeName = Bifiditermes rogierae
| WissName =
| Autor = Hollande 1982
| Ordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
5a2b4a135b9da0e6a12a8431b43e7f3ef1126194
1446
1432
2017-04-28T14:29:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Bifiditermes rogierae
| WissName =
| Autor = Hollande 1982
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
fa00f409395489e90cf802157524cbfd09512136
1466
1446
2017-04-28T14:46:31Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Bifiditermes rogierae
| WissName =
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
aa4ec365006b50578c15e97af9a503d375502304
Category:Bifiditermes
14
314
1433
2017-04-28T14:03:17Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Kalotermitidae]]“
wikitext
text/x-wiki
[[Kategorie:Kalotermitidae]]
cdf345b1d33377f655f20c458ecbca034af21d57
Reticulitermes grassei
0
303
1434
1424
2017-04-28T14:06:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
917fd4a24c79a882bc16ce46703b24e646f5555d
1450
1434
2017-04-28T14:31:59Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
bed6c99bc872aa01914486e03916d5ac079cef5a
1470
1450
2017-04-28T14:47:56Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
bb964163bc372395169f7294ddc2015fb40d1fd0
Template:Systematik
10
117
1435
1121
2017-04-28T14:07:48Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
7c90d80cae137d7dd041f22415aea0e62a53361e
1436
1435
2017-04-28T14:10:16Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
eea80df2398aee02950f489de6f1bc64e526280c
1437
1436
2017-04-28T14:13:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
91048af2e81cc1e8f0be0b4a4794b5c19cbac373
1438
1437
2017-04-28T14:15:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
7f23a14fb6565b223e89f9f065bb775603eb9fef
1439
1438
2017-04-28T14:18:53Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
3427ca7e80031860ac0dd240c1248fca654c0877
1440
1439
2017-04-28T14:20:52Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}} {{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
8dca94bc15c114aa2dcd884b78452fde7befafd9
1476
1440
2017-04-28T14:58:25Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
995f78f200f4e8a90e488f6c47e4c40a8a816f8b
Category:Blattodea
14
267
1441
1115
2017-04-28T14:22:24Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| cockroach.speciesfile.org_TaxonNameID = 1172573
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1
}}
3296ee1c61562710d6a7438b1afd551ba64a36f8
1477
1441
2017-04-28T14:59:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Schaben]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| cockroach.speciesfile.org_TaxonNameID = 1172573
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1
}}
7df7fdafb6f8db76cce530dc6f67f196963f0f04
Category:Blaberoidea
14
268
1442
1119
2017-04-28T14:23:14Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| cockroach.speciesfile.org_TaxonNameID = 1172574
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1848
}}
8d2d24dcd1801ede3dbf6aeffe0eda0ef4b9663f
Category:Corydioidea
14
266
1443
1114
2017-04-28T14:23:45Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Corydioidea
| cockroach.speciesfile.org_TaxonNameID = 1177728
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1252
}}
fb54ee766a07a2115a6bebb0ad3a015202453dc5
Category:Isoptera
14
304
1444
1420
2017-04-28T14:26:21Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
}}
8ed47ef2c372f3f00c45a6bcdb2d39901b1d2543
1475
1444
2017-04-28T14:54:09Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
}}
e39cdeb4029f94b1494de81e75e7ff9866ee5da7
Category:Dictyoptera
14
315
1445
2017-04-28T14:27:34Z
Lollypop
2
Created page with: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | Ordnung = Dictyoptera | LSID = urn:lsid:faunaeur…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Ordnung = Dictyoptera
| LSID = urn:lsid:faunaeur.org:taxname:11907
}}
be81f9960805fc4bb373133b14732ec16e6a45fb
1451
1445
2017-04-28T14:34:34Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Klasse = Insecta
| Ordnung = Dictyoptera
| LSID = urn:lsid:faunaeur.org:taxname:11907
}}
f843876c291d6e2766a6b5f081639ff7af5055aa
Reticulitermes banyulensis
0
309
1448
1426
2017-04-28T14:31:20Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = banyulensis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
a73dbee712aadd01b90d67d04eddc13472cae7cb
1468
1448
2017-04-28T14:47:07Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = banyulensis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
8baa26d51bbdba4ce37d095bd64dfa175ecb8d51
Reticulitermes flavipes
0
308
1449
1425
2017-04-28T14:31:39Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes flavipes
| Autor = (Kollar, 1837)
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = flavipes
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
7dcb2bbcabbd84782b55c33dab2918c8ef473971
1469
1449
2017-04-28T14:47:30Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes flavipes
| Autor = (Kollar, 1837)
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = flavipes
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
56f735811978a3924ceedcdc3e898f8ecd8dff6c
Archimandrita tesselata
0
148
1452
1156
2017-04-28T14:36:17Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor = Rehn, 1903
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Colombia
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata is a large and usually quite shy cockroach species.
It is called "Pfefferschabe" (pepper roach) because of the markings on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On a cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
09caff45bdeea623401a6ad9f9e2bb3f541ec3aa
1453
1452
2017-04-28T14:39:07Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor = Rehn, 1903
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Gattung = Archimandrita
| Untergattung =
| Art = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Colombia
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata is a large and usually quite shy cockroach species.
It is called "Pfefferschabe" (pepper roach) because of the markings on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On a cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
84233db02219e52a9137c92d9111263bc8b329f6
Blaberus giganteus
0
192
1454
1147
2017-04-28T14:39:40Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor = Linnaeus, 1758
| Bild = Blaberus_giganteus.jpg
| Bildbeschreibung = Adult Blaberus giganteus
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Gattung = Blaberus
| Untergattung =
| Art = giganteus
| Verbreitung = Central America and northern South America
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174190
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6598
}}
e3e07aa6d283585c6518677bf776ee3fa4e4d296
Blaptica dubia
0
150
1455
1148
2017-04-28T14:40:42Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| Autor = Serville, 1838
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Blaberinae
| Tribus =
| Gattung = Blaptica
| Art = dubia
| Verbreitung = Argentina, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174202
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6586
}}
2d0b1b893b858bb7f52ca763eb1d60e996618686
Elliptorhina javanica
0
146
1456
1125
2017-04-28T14:41:10Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica on a mushroom
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Elliptorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6342
| cockroach.speciesfile.org_TaxonNameID = 1174403
}}
6b30dbad8c0c2d1fedb6e3c487bb56050f64e97f
Elliptorhina laevigata
0
271
1457
1150
2017-04-28T14:41:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = Saussure & Zehntner, 1895
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Elliptorhina
| Untergattung =
| Art = laevigata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6339
| cockroach.speciesfile.org_TaxonNameID = 1174404
}}
cee901a341220c964576a6af65793dfaafd55241
Gromphadorhina oblongonota
0
175
1458
1128
2017-04-28T14:42:10Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6332
}}
bfebc7ce51949ee93e4d9582d947b206a83c4210
Gromphadorhina portentosa
0
145
1459
1129
2017-04-28T14:42:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Tribus = Gromphadorhini
| Gattung = Gromphadorhina
| Untergattung =
| Art = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 12
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6329
| cockroach.speciesfile.org_TaxonNameID = 1174413
}}
1a1824a88809858ede82097b5837485a8124f22e
Gromphadorhina spec.
0
144
1460
1131
2017-04-28T14:42:48Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Gromphadorhina
| Untergattung =
| Tribus = Gromphadorhini
| Art = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
2f439ad92293bb68cd7d21adbc8287bdaa12456e
Princisia vanwaerebeki
0
152
1461
1133
2017-04-28T14:43:23Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Blaberoidea
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
3e5e11e683385d48da96d9636ab47f3b9366574b
Therea olegrandjeani
0
173
1462
1074
2017-04-28T14:43:51Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor = Fritzsche & Zompro, 2008
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Tribus =
| Gattung = Therea
| Untergattung =
| Art = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178153
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1259
}}
4a3f3856c064d622837b8f741b6fd50c40b1572d
1465
1462
2017-04-28T14:46:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor = Fritzsche & Zompro, 2008
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Corydioidea
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Tribus =
| Gattung = Therea
| Untergattung =
| Art = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178153
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1259
}}
3f217fda03a276a1f99699cd4fe69540f70b7fbd
Therea regularis
0
171
1463
1075
2017-04-28T14:44:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor = Grandcolas, 1993
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| Untergattung =
| Art = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178147
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1267
}}
A small, lively species.
18d8f8c82ef348c16c6c3d0ca9eda4252abbcde0
1464
1463
2017-04-28T14:45:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor = Grandcolas, 1993
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Blattodea
| Superfamilie = Corydioidea
| Familie = Corydiidae
| Unterfamilie = Corydiinae
| Gattung = Therea
| Untergattung =
| Art = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178147
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1267
}}
A small, lively species.
65183633c861615e46198bfe25ba630214f97e4d
Nasutitermes sp
0
316
1471
2017-04-28T14:51:19Z
Lollypop
2
Created page with: „{{Systematik | DeName = | WissName = Nasutitermes (nigriceps?) | Autor = | Kingdom = Animalia | Subkingdom =…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
| Tribus =
| Gattung = Nasutitermes
| Untergattung =
| Art = (nigriceps?)
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
2f20956cb26dc8e0ac12fbe37b22a2db30ba5bbf
Category:Nasutitermes
14
317
1472
2017-04-28T14:52:34Z
Lollypop
2
Created page with: „{{Systematik | DeName = | WissName = Nasutitermes | Autor = | Kingdom = Animalia | Subkingdom = Eumetazoa |…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
| Tribus =
| Gattung = Nasutitermes
}}
c0049f2a26af0765cd15c19ce8a391c351d2dab1
Category:Nasutitermitinae
14
318
1473
2017-04-28T14:53:04Z
Lollypop
2
Created page with: „{{Systematik | DeName = | WissName = Nasutitermes | Autor = | Kingdom = Animalia | Subkingdom = Eumetazoa |…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
}}
ebeff9f16baf4b9dfb58436e4c07335d47e491d4
Category:Termitidae
14
319
1474
2017-04-28T14:53:24Z
Lollypop
2
Created page with: „{{Systematik | DeName = | WissName = Nasutitermes | Autor = | Kingdom = Animalia | Subkingdom = Eumetazoa |…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
}}
f377e74d8f7bd01f9b3274bbaa0b662f0b383b14
Category:Isoptera
14
304
1478
1475
2017-04-28T15:01:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Termiten]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
}}
c91d545c08ff0a74b58d1eae218f3c218edf1a60
1498
1478
2017-04-28T15:44:43Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Termiten]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
dd8af193c4f09e6b22f862b537d7762dd5bb8e73
1524
1498
2017-05-02T06:53:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Termiten]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
a0f82ba09276ba6d48c20922653f0034f41936a5
Template:Systematik
10
117
1479
1476
2017-04-28T15:06:52Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Superfamilie|}}}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
54ef7b5d1e39c88662f86e8b0de51fd56e99c568
1482
1479
2017-04-28T15:17:41Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
a7cd5b0424197e40e156a04b97e9a69f6ebaf8bb
1483
1482
2017-04-28T15:20:57Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
ecc8c83da84638126c0fa1f232e196aecab882a9
1484
1483
2017-04-28T15:21:47Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
a857f99779735d0d92815fec25bdf57f51de1f5c
1485
1484
2017-04-28T15:22:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
ecc8c83da84638126c0fa1f232e196aecab882a9
1486
1485
2017-04-28T15:24:05Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{{Unterfamilie|}}}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
87a7ae6e9166834d54bcc7d53d3f371083cac0fe
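The revision above sorts a taxon page under a parent category with a fallback: a Familie page goes under its Superfamilie if one is set, otherwise under its Unterordnung, and (in the following revision) a Gattung falls back from Unterfamilie to Familie. As a plain-language sketch of that `{{#if:...}}` fallback chain (illustrative only, not part of the dump; the function name is hypothetical):

```python
# Sketch of the Systematik template's parent-category fallback.
# {{#if:{{{X|}}}|...}} treats an empty or missing parameter as false,
# so we return the first non-empty parameter in the fallback chain.

def parent_category(rank: str, params: dict) -> str:
    fallbacks = {
        "Familie": ["Superfamilie", "Unterordnung"],
        "Gattung": ["Unterfamilie", "Familie"],
    }
    for key in fallbacks.get(rank, []):
        if params.get(key):  # empty string or missing key -> keep falling back
            return params[key]
    return ""

# A Blaberidae-style call with no Superfamilie falls back to the Unterordnung.
params = {"Unterordnung": "Blattodea", "Superfamilie": "", "Familie": "Blaberidae"}
print(parent_category("Familie", params))  # -> Blattodea
```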
1491
1486
2017-04-28T15:32:28Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;widht:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;widht:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
be4a197d873964898bb8b8e6e3c63f72cb7654ee
1495
1491
2017-04-28T15:43:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
a2293e98c53ab23f9fba8d96cfca673e7f51139c
1496
1495
2017-04-28T15:43:50Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| www.faunaeur.org_id = 11922
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
</pre>
</noinclude>
75e4c4f741bcded6b2dc29c79ccae28dedb25ca0
1497
1496
2017-04-28T15:44:23Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
72423e54329dcde2a8b42c585d628f32b50680f4
1501
1497
2017-04-28T15:50:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Art}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
0706188734995424533e37a61dfc056058a4c1a6
1509
1501
2017-04-28T16:05:29Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
cc20e24313b70dd493d61b23f2cbbec78aac1a9e
1514
1509
2017-04-28T16:13:10Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung|}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
323bba70f009f098e4b9d959d7e045aa6b87c3cc
1515
1514
2017-04-28T16:18:24Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung|}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
679802300401b3390124e1c36cc70762b9deda79
1516
1515
2017-04-28T16:19:18Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{Ordnung|}}}
|
{{!-}}
{{!}} Ordnung:
{{!}} ''[[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]''
}}
{{#if:{{{Unterordnung|}}}
|
{{!-}}
{{!}} Unterordnung:
{{!}} ''[[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]''
}}
{{#if:{{{Superfamilie|}}}
|
{{!-}}
{{!}} Superfamilie:
{{!}} ''[[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]''
}}
{{#if:{{{Familie|}}}
|
{{!-}}
{{!}} Familie:
{{!}} ''[[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]''
}}
{{#if:{{{Unterfamilie|}}}
|
{{!-}}
{{!}} Unterfamilie:
{{!}} ''[[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]''
}}
{{#if:{{{Tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]''
}}
|-
{{#if:{{{Gattung|}}}|
{{!-}}
{{!}} Gattung:
{{!}} ''[[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]''
}}
|-
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]''
}}
|-
{{#if:{{{Art|}}}|
{{!-}}
{{!}} Art:
{{!}} ''{{{Gattung|}}} {{{Untergattung|}}} {{{Art|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung|}}} {{{Art|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:Spezies]]}}
{{#if:{{{Ordnung|}}}|
-> [[:Kategorie:{{{Ordnung|}}}{{!}}{{{Ordnung|}}}]]
}}
{{#if:{{{Unterordnung|}}}|
-> [[:Kategorie:{{{Unterordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#if:{{{Superfamilie|}}}|
-> [[:Kategorie:{{{Superfamilie|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#if:{{{Familie|}}}|
-> [[:Kategorie:{{{Familie|}}}{{!}}{{{Familie|}}}]]
}}
{{#if:{{{Unterfamilie|}}}|
-> [[:Kategorie:{{{Unterfamilie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#if:{{{Tribus|}}}|
-> [[:Kategorie:{{{Tribus|}}}{{!}}{{{Tribus|}}}]]
}}
{{#if:{{{Gattung|}}}|
-> [[:Kategorie:{{{Gattung|}}}{{!}}{{{Gattung|}}}]]
}}
{{#if:{{{Untergattung|}}}|
-> [[:Kategorie:{{{Untergattung|}}}{{!}}{{{Untergattung|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{Ordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{Unterordnung}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{Ordnung|}}}{{!}}{{{Unterordnung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Superfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{Unterordnung|}}}{{!}}{{{Superfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Familie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{Superfamilie|}}} | {{{Superfamilie|}}} | {{{Unterordnung|}}} }}{{!}}{{{Familie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Unterfamilie}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{Familie|}}}{{!}}{{{Unterfamilie|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{Gattung}}}|
[[Kategorie:{{#if: {{{Unterfamilie|}}} | {{{Unterfamilie|}}} | {{{Familie|}}} }}{{!}}{{{Gattung|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{Gattung|}}}{{!}}{{{Art|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| Ordnung =
| Unterordnung =
| Superfamilie =
| Familie = Blaberidae
| Unterfamilie = Oxyhaloinae
| Gattung = Princisia
| Untergattung =
| Tribus = Gromphadorhini
| Art = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
323bba70f009f098e4b9d959d7e045aa6b87c3cc
1523
1516
2017-05-02T06:53:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
|
{{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
|
{{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
|
{{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
|
{{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
|
{{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}|
{{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}|
{{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}|
{{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
3cfc60e963363a4f71945a1391165269402ecf9a
1526
1523
2017-05-02T07:00:24Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
|
{{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
|
{{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
|
{{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
|
{{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
|
{{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
|
{{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}|
{{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}|
{{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}|
{{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
2f82e620d4359c89b7f9190ec20660e5fa368a3f
1527
1526
2017-05-02T07:06:20Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
{{!-}}
{{!}} | Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
{{!-}}
{{!}} | Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
{{!-}}
{{!}} | Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
{{!-}}
{{!}} | Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
{{!-}}
{{!}} | Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
{{!-}}
{{!}} | Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}|
{{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}|
{{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}|
{{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
dc63d84952e463eae6e216377780fcc5e33daf52
Nasutitermes sp
0
316
1480
1471
2017-04-28T15:09:05Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
| Tribus =
| Gattung = Nasutitermes
| Untergattung =
| Art = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
8be857b2238819c6336d4fb36e2f0ef8958fe722
1481
1480
2017-04-28T15:09:58Z
Lollypop
2
wikitext
text/x-wiki
[[ Kategorie: Nasutitermes]]
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
| Tribus =
| Gattung = Nasutitermes
| Untergattung =
| Art = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
69f63fec4e1a7f0345005152db95c4a47da9c5f0
1487
1481
2017-04-28T15:25:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Nasutitermes ]]
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
| Tribus =
| Gattung = Nasutitermes
| Untergattung =
| Art = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
eb2de7e1783c93a54df270052003a8f3c5861386
1493
1487
2017-04-28T15:38:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Nasutitermes ]]
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| Unterfamilie = Nasutitermitinae
| Tribus =
| Gattung = Nasutitermes
| Untergattung =
| Art = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
Aus: Jamaica, Bahía Montego (David)
899f546aae53ae66e8767ed2b2244cb1811670a3
Neotermes sp
0
320
1488
2017-04-28T15:28:13Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie: Neotermes ]] {{Systematik | DeName = | WissName = Neotermes sp. | Autor = | Kingdom = Animalia | Subk…“
wikitext
text/x-wiki
[[Kategorie: Neotermes ]]
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Neotermes
| Untergattung =
| Art = sp
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
40172740c6a35213324047b13ee71678257834b7
1522
1488
2017-05-02T06:31:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Neotermes ]]
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Neotermes
| Untergattung =
| Art = sp
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
* [http://www.boldsystems.org/index.php/Taxbrowser_Taxonpage?taxid=354458 BoldSystems Database : Neotermes castaneus]
81573430c91cb8c785dfa09ac4b3bf44f61b80bf
Category:Neotermes
14
321
1489
2017-04-28T15:28:47Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | WissName = Neotermes sp. | Autor = | Kingdom = Animalia | Subkingdom = Eumetazoa |…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Neotermes
}}
1eecddde56fcb81a299a3bfa27c0f28ce8d601e4
1492
1489
2017-04-28T15:35:42Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Neotermes
}}
5ae3f4f7eb39c5af782115be5fe66588848e285d
Category:Kalotermitidae
14
311
1490
1430
2017-04-28T15:29:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Isoptera]]
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
}}
2e59657c812a0a230d23bb6572562ae275ac1f1e
1500
1490
2017-04-28T15:48:08Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Isoptera]]
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| LSID = urn:lsid:faunaeur.org:taxname:11923
| www.faunaeur.org_id = 11923
}}
fdaca2af29fba2ded542c764aa6a313486a24375
1502
1500
2017-04-28T15:51:45Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| LSID = urn:lsid:faunaeur.org:taxname:11923
| www.faunaeur.org_id = 11923
}}
ead87b680b50f1a61a151661e8f59591b0fd9b40
1504
1502
2017-04-28T15:53:18Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Kalotermitidae
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| LSID = urn:lsid:faunaeur.org:taxname:11923
| www.faunaeur.org_id = 11923
}}
82a08492bc9203856d75ed41fdbcb86f2783ba18
1525
1504
2017-05-02T06:55:03Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Kalotermitidae
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| LSID = urn:lsid:faunaeur.org:taxname:11923
| www.faunaeur.org_id = 11923
}}
71e05a06d58c6a7ffe58ce7f6a4f5c3cfede58d0
Bifiditermes rogierae
0
313
1494
1466
2017-04-28T15:39:01Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Bifiditermes rogierae
| WissName =
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
Aus: Teneriffa
3be5d37eea659d58960bd78d4895dfeb28127c84
1507
1494
2017-04-28T16:00:50Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Bifiditermes rogierae
| WissName =
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337259
| www.faunaeur.org_id = 337259
}}
Aus: Teneriffa
ec377d242f14711f0fb9c435bf777b16310ab4f7
1508
1507
2017-04-28T16:03:21Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogierae
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337259
| www.faunaeur.org_id = 337259
}}
Aus: Teneriffa
a7125159831596687e9e0fd2a08ffcda00f67528
1510
1508
2017-04-28T16:06:40Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogierae
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
a8939a6d07dde157b8db26d64cf0cdef471f5b51
1521
1510
2017-04-28T16:47:34Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogierae
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
4c6898fe5695ab15fe4e93b1495f86b293793e95
Category:Dictyoptera
14
315
1499
1451
2017-04-28T15:45:47Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Klasse = Insecta
| Ordnung = Dictyoptera
| LSID = urn:lsid:faunaeur.org:taxname:11907
| www.faunaeur.org_id = 11907
}}
3eff3e758e7ff0e657468f486a03031a7bb3e570
Category:Rhinotermitidae
14
305
1503
1421
2017-04-28T15:52:38Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Rhinotermitidae
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| LSID = urn:lsid:faunaeur.org:taxname:11924
| www.faunaeur.org_id = 11924
}}
4adc2d493fb0f0d59616d246313715fe35e5374b
Category:Termitidae
14
319
1505
1474
2017-04-28T15:55:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Termitidae
| Autor =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Termitidae
| LSID = urn:lsid:faunaeur.org:taxname:336400
| www.faunaeur.org_id = 336400
}}
2537e719ace605ee6b8ddf5e55f657aac24504c8
Category:Bifiditermes
14
314
1506
1433
2017-04-28T15:57:55Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Gattung = Bifiditermes
| LSID = urn:lsid:faunaeur.org:taxname:337259
| www.faunaeur.org_id = 337259
}}
0c6d2ec557505b3aa6d662d67825d88d91d99bab
1511
1506
2017-04-28T16:07:31Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Krishna 1961
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Gattung = Bifiditermes
| LSID = urn:lsid:faunaeur.org:taxname:337259
| www.faunaeur.org_id = 337259
}}
b2ef7d6faef3b1493506fa8f0bb065f1cfd93640
Category:Cryptotermes
14
312
1512
1431
2017-04-28T16:09:25Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Banks 1906
| Bild =
| Bildbeschreibung =
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Gattung = Cryptotermes
| LSID = urn:lsid:faunaeur.org:taxname:337261
| www.faunaeur.org_id = 337261
}}
50e96590af0d8f074b309728e1aa15d32fafa2e4
Cryptotermes brevis
0
310
1513
1467
2017-04-28T16:10:48Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Cryptotermes
| Untergattung =
| Art = brevis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337262
| www.faunaeur.org_id = 337262
}}
b71ca1af03fe4acda4f42375dd4ef30038541d32
Category:Reticulitermes
14
307
1517
1423
2017-04-28T16:23:02Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Rhinotermitidae
| Autor = Holmgren 1913
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Gattung = Reticulitermes
| LSID = urn:lsid:faunaeur.org:taxname:337268
| www.faunaeur.org_id = 337268
}}
ec10310514d3b3a66dd3fe0fccd0bfd7da539bea
Reticulitermes banyulensis
0
309
1518
1468
2017-04-28T16:24:24Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = banyulensis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337269
| www.faunaeur.org_id = 337269
}}
d054f57fe0fc312c26745de8105549899f572ca9
Reticulitermes flavipes
0
308
1519
1469
2017-04-28T16:25:19Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes flavipes
| Autor = (Kollar, 1837)
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = flavipes
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337272
| www.faunaeur.org_id = 337272
}}
41d251ce2ec09a5e1b7d60624b61e8cc9d7f72f1
Reticulitermes grassei
0
303
1520
1470
2017-04-28T16:26:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Rhinotermitidae
| Unterfamilie = Heterotermitinae
| Tribus =
| Gattung = Reticulitermes
| Untergattung =
| Art = grassei
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337274
| www.faunaeur.org_id = 337274
}}
241e2fdf773ad9d48c88d67599206ebd3fa272f3
Template:Systematik
10
117
1528
1527
2017-05-02T07:08:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
{{!-}}
{{!}} | Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
{{!-}}
{{!}} | Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
{{!-}}
{{!}} | Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
{{!-}}
{{!}} | Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
{{!-}}
{{!}} | Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
{{!-}}
{{!}} | Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}|
{{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}|
{{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}|
{{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
222c9b7457b18ba31be25b1aaf0e60fa9c619b84
1529
1528
2017-05-02T07:13:12Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
514946ea7fe2a83160d73dc3a45c855c24e65d58
1530
1529
2017-05-02T07:13:45Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo: {{{ordo|}}}
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6c359d3e530f5e0536d96450fb68ca6d8298fe8f
1531
1530
2017-05-02T07:14:45Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!-}}
{{!}} {{{ordo|}}}
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d943f05a57a0c6739c892f8ef3b80051b14400d8
1532
1531
2017-05-02T07:15:24Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} {{{ordo|}}}
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
cf7c8011736c37cdf3d2761898dc561409aed90e
1533
1532
2017-05-02T07:16:20Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
468cfe8507ecbea53500eb421ada48fb28976121
1534
1533
2017-05-02T07:23:33Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
4715e3c243eed39b4aa5d8d7234d8136f9134f84
1550
1534
2017-05-02T10:32:00Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
52ce6bb293d8242145675c6440e3d80347e8a23c
1551
1550
2017-05-02T10:33:44Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=6}}
[[Kategorie:{{{ordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
243a36c41f13723c49e0a0c0605224555011ce40
1555
1551
2017-05-02T11:27:20Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
|
{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
|
{{!-}}
}}
{{#if:{{{Habitat|}}}
|
{{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
|
{{!-}}
}}
{{#if:{{{Nahrung|}}}
|
{{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
|
{{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
|
{{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
|
{{!-}}
}}
{{#if:{{{Temperatur|}}}
|
{{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
|
{{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
|
{{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
|
{{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
95461fbd725addc5d752d1203156d25ffda5f9d7
1557
1555
2017-05-02T13:53:28Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{WissName}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
7affe25996371095e9e70e128dea52d269c2a903
1564
1557
2017-05-02T14:07:37Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
59d35bd15a0cbe96d09d81ff51c74a7eda00fe4c
1565
1564
2017-05-02T14:10:06Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}_{{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
4132a5d4a3e4a7e1429068522a409e5783b15946
1566
1565
2017-05-02T14:10:29Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.genusfile.org_TaxonNameID|}}}|
* [http://cockroach.genusfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.genusfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.genusfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Princisia
| subgenus =
| tribus = Gromphadorhini
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.genusfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
59d35bd15a0cbe96d09d81ff51c74a7eda00fe4c
Category:Bifiditermes
14
314
1535
1511
2017-05-02T07:27:30Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Krishna 1961
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| genus = Bifiditermes
| LSID = urn:lsid:faunaeur.org:taxname:337259
| www.faunaeur.org_id = 337259
}}
336f0fff8a341e3734104d385dc95df237baa0b3
Category:Cryptotermes
14
312
1536
1512
2017-05-02T07:31:39Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Banks 1906
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| genus = Cryptotermes
| LSID = urn:lsid:faunaeur.org:taxname:337261
| www.faunaeur.org_id = 337261
}}
3fb9dac0481d7ebbc1f097eb73965b89e645178d
Category:Neotermes
14
321
1537
1492
2017-05-02T09:47:41Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Neotermes
}}
7fb9690ae1b36e94f6542638801aef2ad4b2c9c9
Category:Rhinotermitidae
14
305
1538
1503
2017-05-02T09:51:29Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Rhinotermitidae
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| LSID = urn:lsid:faunaeur.org:taxname:11924
| www.faunaeur.org_id = 11924
}}
6a5ab4389a08132ca8017848c4d02e5a61f88b25
Category:Reticulitermes
14
307
1539
1517
2017-05-02T09:52:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes
| Autor = Holmgren 1913
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| LSID = urn:lsid:faunaeur.org:taxname:337268
| www.faunaeur.org_id = 337268
}}
08491b8eb5a3cbfdd781160539750d981d33d995
Category:Termitidae
14
319
1540
1505
2017-05-02T09:52:55Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Termitidae
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| LSID = urn:lsid:faunaeur.org:taxname:336400
| www.faunaeur.org_id = 336400
}}
6a1f4f472f86cc4a5f4c7fc98a475204f074e14a
1556
1540
2017-05-02T13:45:53Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Termitidae
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| LSID = urn:lsid:faunaeur.org:taxname:336400
| www.faunaeur.org_id = 336400
}}
a8bf23ca53ea5219eab11b6ec61a7e43dda8cff9
Nasutitermes sp
0
316
1541
1493
2017-05-02T09:54:42Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Nasutitermes ]]
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
| tribus =
| genus = Nasutitermes
| subgenus =
| species = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
From: Jamaica, Bahía Montego (David)
803c7bf996c30e3a377650508dc413c1d9354984
1542
1541
2017-05-02T09:55:21Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
| tribus =
| genus = Nasutitermes
| subgenus =
| species = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
From: Jamaica, Bahía Montego (David)
69c45a6d5ff08e50aff67ffee5aea980682521bc
1545
1542
2017-05-02T09:57:34Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
| tribus =
| genus = Nasutitermes
| subgenus =
| species = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
From: Jamaica, Bahía Montego (David), Nasutitermes nigriceps (?)
8fd6ce4565581cc1b437a793ca0eb48f3ea295a6
1546
1545
2017-05-02T09:57:49Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes sp
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
| tribus =
| genus = Nasutitermes
| subgenus =
| species = sp
| Verbreitung = Jamaica
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
From: Jamaica, Bahía Montego (David), Nasutitermes nigriceps (?)
a493824ce665068f3077b71a5b3f3ebeb5774622
Category:Nasutitermes
14
317
1543
1472
2017-05-02T09:55:56Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
| tribus =
| genus = Nasutitermes
}}
1f195f204e4f3adcd35491829d79bb054dd4f0dc
Category:Nasutitermitinae
14
318
1544
1473
2017-05-02T09:56:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes (nigriceps?)
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
}}
b26f0ce09a918c5cc5535c41cc5ab04c8dde86e5
Reticulitermes banyulensis
0
309
1547
1518
2017-05-02T09:59:20Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| subgenus =
| species = banyulensis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337269
| www.faunaeur.org_id = 337269
}}
0cf84339259519c4132bcce4911d9102eb2e3510
Reticulitermes flavipes
0
308
1548
1519
2017-05-02T09:59:59Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes flavipes
| Autor = (Kollar, 1837)
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| subgenus =
| species = flavipes
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337272
| www.faunaeur.org_id = 337272
}}
408016e40a67535a60f71fd0d63c5e3367844cfb
Category:Dictyoptera
14
315
1549
1499
2017-05-02T10:24:56Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| classis = Insecta
| ordo = Dictyoptera
| LSID = urn:lsid:faunaeur.org:taxname:11907
| www.faunaeur.org_id = 11907
}}
94df1f054ac9aef11775454c7184b9f5849bbf4f
1552
1549
2017-05-02T10:34:25Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| classis = Insecta
| superordo = Dictyoptera
| LSID = urn:lsid:faunaeur.org:taxname:11907
| www.faunaeur.org_id = 11907
}}
62b3cfe074aeec6f726db95b867ae64f0cd3228d
1553
1552
2017-05-02T10:35:26Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| LSID = urn:lsid:faunaeur.org:taxname:11907
| www.faunaeur.org_id = 11907
}}
fdb53f81141336c26faf5e6f9b1c0490a0deca54
Category:Insecta
14
322
1554
2017-05-02T10:36:14Z
Lollypop
2
Created page with: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | regnum = Animalia | subregnum = Eumetazoa | phylum…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| LSID = urn:lsid:faunaeur.org:taxname:11907
| www.faunaeur.org_id = 11907
}}
de63240b10877515cf8fc26f84efb81198e6b23e
Category:Blattodea
14
267
1558
1477
2017-05-02T13:59:36Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Schaben]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| ordo = Dictyoptera
| subordo = Blattodea
| cockroach.speciesfile.org_TaxonNameID = 1172573
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1
}}
446d89b5ffb8bd2f0f5c631af5ff7d4833ed7b9e
1559
1558
2017-05-02T14:01:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Schaben]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| cockroach.speciesfile.org_TaxonNameID = 1172573
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1
}}
a0ca21cd72d47caf16c9f9cefcf88ec634d833fc
Category:Blaberoidea
14
268
1560
1442
2017-05-02T14:02:57Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| cockroach.speciesfile.org_TaxonNameID = 1172574
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1848
}}
aca8f10d6411d61c92784a4f1d2d93121251df3b
Category:Blaberinae
14
269
1561
1135
2017-05-02T14:03:48Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| cockroach.speciesfile.org_TaxonNameID = 1174138
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6392
}}
3190ee4392b68e6810086db709691e97fc3bc567
Category:Archimandrita
14
149
1562
1139
2017-05-02T14:04:12Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1893
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| genus = Archimandrita
| cockroach.speciesfile.org_TaxonNameID = 1174139
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6664
}}
8f8a0cc442a9c9005fc64e7166540a3233cdecc4
Archimandrita tesselata
0
148
1563
1453
2017-05-02T14:05:39Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Pfefferschabe
| Bild = Archimandrita_tesselata_IMG_2891.JPG
| Bildbeschreibung = Archimandrita tesselata on a piece of cucumber
| Autor = Rehn, 1903
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| genus = Archimandrita
| Untergattung =
| species = tesselata
| Verbreitung = Guatemala, Costa Rica, Panama, Kolumbien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 23
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174141
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6665
}}
Archimandrita tesselata is a large and usually rather shy cockroach species.
It is called "Pfefferschabe" (peppered roach) because of the speckled pattern on its wing covers.
<gallery mode="packed-hover">
Image:Archimandrita tesselata IMG 2891.JPG|On cucumber
Image:Archimandrita_tesselata_R0015551.png|Can these eyes lie?
</gallery>
<gallery mode="slideshow" caption="Molting sequence of a male Archimandrita tesselata">
Image:Archimandrita_tesselata_IMG_2901.png
Image:Archimandrita_tesselata_IMG_2902.png
Image:Archimandrita_tesselata_IMG_2903.png
Image:Archimandrita_tesselata_IMG_2904.png
Image:Archimandrita_tesselata_IMG_2905.png
Image:Archimandrita_tesselata_IMG_2906.png
Image:Archimandrita_tesselata_IMG_2907.png
Image:Archimandrita_tesselata_IMG_2908.png
Image:Archimandrita_tesselata_IMG_2909.png
Image:Archimandrita_tesselata_IMG_2910.png
Image:Archimandrita_tesselata_IMG_2911.png
Image:Archimandrita_tesselata_IMG_2912.png
Image:Archimandrita_tesselata_IMG_2913.png
Image:Archimandrita_tesselata_IMG_2914.png
Image:Archimandrita_tesselata_IMG_2915.png
Image:Archimandrita_tesselata_IMG_2916.png
Image:Archimandrita_tesselata_IMG_2917.png
Image:Archimandrita_tesselata_IMG_2918.png
</gallery>
<slideshow sequence="forward" transition="blindDown" refresh="3000">
[[Image:Archimandrita_tesselata_IMG_2901.png|thumb|right|256px|Caption 1]]
[[Image:Archimandrita_tesselata_IMG_2902.png|thumb|right|256px|Caption 2]]
[[Image:Archimandrita_tesselata_IMG_2903.png|thumb|right|256px|Caption 3]]
[[Image:Archimandrita_tesselata_IMG_2904.png|thumb|right|256px|Caption 4]]
[[Image:Archimandrita_tesselata_IMG_2905.png|thumb|right|256px|Caption 5]]
[[Image:Archimandrita_tesselata_IMG_2906.png|thumb|right|256px|Caption 6]]
</slideshow>
d4dc8260c1f5d03472c3a806ebc751feb13d574c
Category:Blaberus
14
193
1567
1142
2017-05-02T14:11:45Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Serville, 1831
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| genus = Blaberus
| cockroach.speciesfile.org_TaxonNameID = 1174154
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6590
}}
b613e5b72935f60c6e1ebe3d3023b69738bf7d05
Blaberus giganteus
0
192
1568
1454
2017-05-02T14:12:38Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Mittelamerikanische Riesenschabe
| WissName = Blaberus giganteus
| Autor = Linnaeus, 1758
| Bild = Blaberus_giganteus.jpg
| Bildbeschreibung = Adult Blaberus giganteus
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| genus = Blaberus
| subgenus =
| species = giganteus
| Verbreitung = Mittelamerika und nördliches Südamerika
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174190
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6598
}}
2dd0c0b6d1a7d4af07641b338214219b0c2ae094
Category:Blaptica
14
151
1569
1143
2017-05-02T14:13:27Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Stål, 1874
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| genus = Blaptica
| cockroach.speciesfile.org_TaxonNameID = 1174201
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6568
}}
9a591a91c3e77e84e39df03cbc4a2ba2b8a3940e
Blaptica dubia
0
150
1570
1455
2017-05-02T14:14:09Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Argentinische Waldschabe
| Autor = Serville, 1838
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Blaberinae
| genus = Blaptica
| species = dubia
| Verbreitung = Argentinien, Paraguay, Uruguay
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174202
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6586
}}
ad048c2b353c43453b95491a70f8ddecff36bf1b
Category:Oxyhaloinae
14
263
1571
1123
2017-05-02T14:14:49Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6192
| cockroach.speciesfile.org_TaxonNameID = 1174364
}}
41175d4ba045ce1d9888d72afa5cd42c725f6e4c
Category:Elliptorhina
14
147
1572
1124
2017-05-02T14:15:19Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Elliptorhina
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6334
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
a626641d8b012d82a480c5f2f600b324dadb300c
Elliptorhina javanica
0
146
1573
1456
2017-05-02T14:16:22Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica on a mushroom
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Elliptorhina
| subgenus =
| tribus = Gromphadorhini
| species = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6342
| cockroach.speciesfile.org_TaxonNameID = 1174403
}}
e3a7064cb0f2d2a723338728933f1aebdcea0e24
Category:Gromphadorhini
14
323
1574
2017-05-02T14:17:39Z
Lollypop
2
Die Seite wurde neu angelegt: „<nowiki>Unformatierten Text hier einfügen</nowiki>{{Systematik | WissName = Gromphadorhini | Autor = | Bild = | Bildbeschr…“
wikitext
text/x-wiki
{{Systematik
| WissName = Gromphadorhini
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Elliptorhina
| subgenus =
| tribus = Gromphadorhini
}}
76c9846d552a696cc1e5f3a05f6aaa99a086d864
Elliptorhina laevigata
0
271
1575
1457
2017-05-02T14:18:41Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = Saussure & Zehntner, 1895
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Elliptorhina
| species = laevigata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6339
| cockroach.speciesfile.org_TaxonNameID = 1174404
}}
3ff0d6fba93f88c6b1a5ccc3ff748abc5953bd76
Category:Gromphadorhina
14
183
1576
1140
2017-05-02T14:19:28Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Gromphadorhina
| cockroach.speciesfile.org_TaxonNameID = 1174409
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6328
}}
6b9fe1106b434132c874b2656bfc14367b82453a
Gromphadorhina oblongonota
0
175
1577
1458
2017-05-02T14:21:27Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| genus = Gromphadorhina
| tribus = Gromphadorhini
| species = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6332
}}
35bc65bd14d5efe9fde4fe18fc9cbb40b24d382b
Gromphadorhina portentosa
0
145
1578
1459
2017-05-02T14:24:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina portentosa
| Autor = Schaum, 1853
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Gromphadorhina
| subgenus =
| species = portentosa
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 12
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6329
| cockroach.speciesfile.org_TaxonNameID = 1174413
}}
16e2a3847889731c9c87438491a4abba2ece0819
Gromphadorhina spec.
0
144
1579
1460
2017-05-02T14:25:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Gromphadorhina
| subgenus =
| species = spec.
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
1397351f6d8bbd975be12479f2d01134a4f0cbca
Category:Elliptorhina
14
147
1580
1572
2017-05-02T14:25:57Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Elliptorhina
| subgenus =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6334
| cockroach.speciesfile.org_TaxonNameID = 1174395
}}
499994cbbd21f1f4fce1a3afecce6d38692550f8
Elliptorhina javanica
0
146
1581
1573
2017-05-02T14:26:32Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Elliptorhina javanica
| Autor = Hanitsch, 1930
| Bild = Elliptorhina_javanica.JPG
| Bildbeschreibung = Elliptorhina javanica on a mushroom
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Elliptorhina
| subgenus =
| species = javanica
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6342
| cockroach.speciesfile.org_TaxonNameID = 1174403
}}
4a67589337cee3d7e499cd7e899a5f9e717ab9e4
Elliptorhina laevigata
0
271
1582
1575
2017-05-02T14:26:54Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = Saussure & Zehntner, 1895
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Elliptorhina
| subgenus =
| species = laevigata
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6339
| cockroach.speciesfile.org_TaxonNameID = 1174404
}}
b78716064a1e297df71a0848c67f14706fa8d6a0
Category:Princisia
14
153
1583
1141
2017-05-02T14:28:09Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| cockroach.speciesfile.org_TaxonNameID = 1174415
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6325
}}
ff28a2d9116f642cbda4d6bb48265cbe5e365d8a
1584
1583
2017-05-02T14:28:42Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = van Herrewege, 1973
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| cockroach.speciesfile.org_TaxonNameID = 1174415
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6325
}}
d55f60486bf6d88ee6ed36d0350b47473155bad4
Princisia vanwaerebeki
0
152
1585
1461
2017-05-02T14:30:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
b75cd858c9646b75e503b579b01d699b95fd7fc1
Template:Systematik
10
117
1586
1566
2017-05-02T14:31:44Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d36b55ffa4c79cd39bba3ce7235287fe08008da5
1587
1586
2017-05-02T14:32:53Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
cc30b436e96c7aa21aac71bc019deec1dba795f0
1588
1587
2017-05-02T14:33:45Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
079e8bc20483cffd4921e344dff366cc3fe9328a
1599
1588
2017-05-02T14:49:25Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
7ec2b983907144a033f2eb50285e73604653b165
1600
1599
2017-05-02T14:51:38Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
9b90244fcb51dea8e589233f01bdb85c5e2d4f89
1601
1600
2017-05-02T14:52:48Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
450fbb3cc9e3de2b430988a6e5ed6b88418b60b2
1602
1601
2017-05-02T16:40:42Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}}|{{{dominia|}}}{{!}}}}
{{#if:{{{regnum|}}}|{{{regnum|}}}{{!}}}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{taxon|}}}|
* {{{taxon|}}}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
f3495ed76c9e3acb982fab8c9b8eff9cd79776c6
1603
1602
2017-05-02T16:42:03Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}}|{{{dominia|}}}{{!}}}}
{{#if:{{{regnum|}}} |{{{regnum|}}}{{!}}}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{{taxon|}}}|
* {{{taxon|}}}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
ce12746b6cd552a9647da47aaa98a668a0bbbc4e
1604
1603
2017-05-02T16:54:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|{{#if:{{{dominia|}}}|{{{dominia|}}}{{!}}}}{{#if:{{{regnum|}}} |{{{regnum|}}}{{!}}}}}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|
* {{#var:taxon}}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
83530ee025573428096184af4194c20e6b5a2827
1605
1604
2017-05-02T17:05:06Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}}
|{{{dominia|}}}{{!}}
}}
{{#if:{{{regnum|}}}
|{{{regnum|}}}{{!}}
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
eb9803651a08175eedb410070a450ac96e5050d6
1606
1605
2017-05-02T17:07:48Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefineecho:taxon|
{{#if:{{{dominia|}}}
|{{{dominia|}}}{{!}}
}}
{{#if:{{{regnum|}}}
|{{{regnum|}}}{{!}}
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
46a18dd29df3f3928fe3f56f8a1b50260715586e
1607
1606
2017-05-02T17:10:25Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefineecho:taxon|
''Test''
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
69a28e00725d5d8d550e509905bfc7b2a1ab2bc8
1608
1607
2017-05-02T17:11:49Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefineecho:taxon|
{{#if:{{{regnum|}}}|{{{regnum|}}}{{!}}}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
486f92fafde162a4037557e0249828e80da31ceb
1609
1608
2017-05-02T17:12:55Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefineecho:taxon|
{{#if:{{{regnum|}}}|[[Kategorie:{{{regnum|}}}]]|}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
bc6fd25feb8772ad73cafabdbd90adcf3de8f93c
1610
1609
2017-05-02T17:14:52Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefineecho:taxon|
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
648e4d18237bbb5dcc106615766401da27b1d179
1611
1610
2017-05-02T17:18:19Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
587056f1a730469673a27d8b4bfdfbb3ff7df884
1612
1611
2017-05-02T17:20:20Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
76a7ccf83c2295511ebceac373baaf5e837db5ee
1613
1612
2017-05-02T17:26:43Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
e434ee0c20f2f0a2f976fc80cf3c93ff777d5462
1614
1613
2017-05-02T17:35:03Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{flatlist|{{#var:taxon}}}} }}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
31939f4c94186f117d90ebc70af157ab145cc74b
1615
1614
2017-05-02T17:54:28Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d651ad19721009b69456a58063e2206a90f3b4b3
1616
1615
2017-05-02T17:57:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
5d0a6d9ebdf3ca6abe511dfc3fa689e5eee663aa
1617
1616
2017-05-02T17:57:54Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}}     | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d5adb6cbb41ecb80dbaf7834f2ccd1c4aa10717f
1618
1617
2017-05-02T18:00:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d651ad19721009b69456a58063e2206a90f3b4b3
1619
1618
2017-05-02T19:08:36Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{nowrap begin}}
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
{{nowrap end}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
3e30b68a60f20e2540480e6d7b98127ad957d78e
1620
1619
2017-05-02T20:17:44Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{flatlist|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6afa00b65b72f5aafbf285ab721e46960e9e0f7e
1621
1620
2017-05-02T20:18:39Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{flatlist begin}}
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
{{flatlist end}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
1409166dd2ff291d986657bbc39676b7517e724f
1622
1621
2017-05-02T20:20:35Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{nowrap|{{#var:taxon}}}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
eeda130cdff775fcc03270574611912e5cc03a81
1623
1622
2017-05-02T20:21:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | * [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | * [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | * [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | * [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | * [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| * [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | * [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | * [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | * [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | * [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | * [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| * [[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | * [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | * [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | * [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | * [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | * [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
5b148a0e711123cf823a73e80cf45ac54fb03731
1624
1623
2017-05-02T20:24:13Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}|{{#var:taxon}}}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
934603aee42133d438dd300be10eacf9654e6873
1625
1624
2017-05-02T20:31:44Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|<pre style="white-space: pre;">{{#var:taxon}}</pre>
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
b68b196d74b7b180488984f91f15409375e60030
1626
1625
2017-05-02T20:35:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|<div style="white-space: pre;">{{#var:taxon}}</div>
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
3501a1a35857f37c10d2905bf78cb5476856c222
1627
1626
2017-05-02T20:41:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /\n+/d }}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
956ad72a9b954fb00b4a0be0863dbed005ef2344
Category:Gromphadorhina
14
183
1589
1576
2017-05-02T14:37:24Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Brunner von Wattenwyl, 1865
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Gromphadorhina
| cockroach.speciesfile.org_TaxonNameID = 1174409
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6328
}}
5372301da3dcb5341127bb284ed841ab97fb8207
Gromphadorhina oblongonota
0
175
1590
1577
2017-05-02T14:37:43Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Gromphadorhina
| species = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6332
}}
bf54e30887866fb086ea7b99c48bf6ce361ab712
Therea olegrandjeani
0
173
1591
1465
2017-05-02T14:42:14Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fragezeichen-Schabe
| WissName = Therea olegrandjeani
| Autor = Fritzsche & Zompro, 2008
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Blattodea
| superfamilia = Corydioidea
| familia = Corydiidae
| subfamilia = Corydiinae
| tribus =
| genus = Therea
| subgenus =
| species = olegrandjeani
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178153
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1259
}}
72605d1ece683c0a982cabf797303e0f52564cdb
Category:Therea
14
172
1592
1116
2017-05-02T14:42:36Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Billberg, 1820
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Blattodea
| superfamilia = Corydioidea
| familia = Corydiidae
| subfamilia = Corydiinae
| tribus =
| genus = Therea
| cockroach.speciesfile.org_TaxonNameID = 1178142
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1257
}}
a3ab06a66397b26de37512a522b67bde0c9af544
Therea regularis
0
171
1593
1464
2017-05-02T14:42:59Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Dominoschabe
| WissName = Therea regularis
| Autor = Grandcolas, 1993
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Blattodea
| superfamilia = Corydioidea
| familia = Corydiidae
| subfamilia = Corydiinae
| tribus =
| genus = Therea
| species = regularis
| Verbreitung = Indien
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1178147
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1267
}}
Small, lively species.
2c5c7af46d7ef40a901416a6a34807a3a262d788
Category:Corydiinae
14
262
1594
1117
2017-05-02T14:43:45Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Blattodea
| superfamilia = Corydioidea
| familia = Corydiidae
| subfamilia = Corydiinae
| cockroach.speciesfile.org_TaxonNameID = 1177956
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1256
}}
6dce1b99f657b553fec1da01aebfe71b89c227c2
Category:Corydiidae
14
265
1595
1113
2017-05-02T14:44:00Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Blattodea
| superfamilia = Corydioidea
| familia = Corydiidae
| cockroach.speciesfile.org_TaxonNameID = 1177778
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1253
}}
4bc7d66daaa53c49b757e75a8499f7067bcdc8c4
Category:Corydioidea
14
266
1596
1443
2017-05-02T14:44:30Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Blattodea
| superfamilia = Corydioidea
| cockroach.speciesfile.org_TaxonNameID = 1177728
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1252
}}
bfa5387f1c4e7b49a843f81f31f0939e02e2cbaf
1598
1596
2017-05-02T14:48:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Corydioidea
| cockroach.speciesfile.org_TaxonNameID = 1177728
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1252
}}
008b87d368dde80198b14ff16865e7fe8a8b6206
Category:Blaberidae
14
264
1597
1122
2017-05-02T14:47:21Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor = Saussure, 1864
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6191
| cockroach.speciesfile.org_TaxonNameID = 1172575
}}
5a178dbb737be1137bd587ee2415adcaefb147e8
Template:Systematik
10
117
1628
1627
2017-05-02T20:48:39Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /\\[nr]/ | }}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
fbd4de9849695eb8e040c0ae983285debe6a6b76
1629
1628
2017-05-02T20:49:36Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /[\\n]+/ | }}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
ea7384f4bfc50666056853e9a5237aca2b12c26e
1630
1629
2017-05-02T20:50:59Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /[ \\n]+/ | }}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
9b7dc99fd552a2990129b087ea4ea1c5594fe4f7
1631
1630
2017-05-02T20:52:02Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /[ \\r\\n]+/ | }}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
f9d27679b72bd9bf2b75b67d182e51864e6c4968
1632
1631
2017-05-02T20:53:24Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /[ \r\n]+/ | _ }}
}}
{{#if:{{{ordo|}}}|
-> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]
}}
{{#if:{{{subordo|}}}|
-> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]
}}
{{#if:{{{superfamilia|}}}|
-> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#if:{{{familia|}}}|
-> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]
}}
{{#if:{{{subfamilia|}}}|
-> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#if:{{{tribus|}}}|
-> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]
}}
{{#if:{{{genus|}}}|
-> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]
}}
{{#if:{{{subgenus|}}}|
-> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
963d61a55c55d6882c8e82df98316f98dfe0dae4
1633
1632
2017-05-02T20:54:50Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
|
[[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}
</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /[ \r\n]+/ | _ }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
f7f82500522b7da7cada5c550c37a93893a0bbbd
1634
1633
2017-05-02T21:00:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
|{{#regex: {{#var:taxon}} | /[ \r\n]+/ | _ }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
ea1ca0ddf0bfe782b047ff3d877f882df8e9176b
1635
1634
2017-05-02T21:05:58Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| * {{#regex: {{#var:taxon}} | /[ \r\n]+/ | _ }}
* <span style="white-space: pre;">{{#var:taxon}}</span>
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
717389e8ebafd156cc857676a098611f6f0881bb
1636
1635
2017-05-02T21:07:09Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | _ }}
<span style="white-space: pre;">
{{#var:taxon}}
</span>
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6da82f6bd47fb0230ef32416e73887ce0206e73c
1637
1636
2017-05-02T21:10:02Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
805ec2fdbc36ad768ea8ece3087d4773aeb0f965
1638
1637
2017-05-02T21:12:56Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#regex:
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
|-
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
|-
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
|-
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
| /[\n]+/g | \n }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
9dea873c5fab296465e14c33ca61d33f4483fe87
1639
1638
2017-05-02T21:19:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:sidetaxon|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#regex: {{#var:sidetaxon}} | /[\r\n]+/ | \n }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d1e493c62d56ff8db8c2a828f0e0ee623abea300
1640
1639
2017-05-02T21:21:26Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonside}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
eb4f704958cb1e5f219bc994d728c806c33a7aab
1641
1640
2017-05-02T21:23:15Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#regex: {{#var:taxonside}} | /[\n]+/ | \n }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
97dc9b1a88bfa07a54066164dea2e843e245bc48
1642
1641
2017-05-02T21:24:21Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#regex: {{#var:taxonside}} | /[\n]+/ | \n }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
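Note on the pipe escaping used in this template: inside parser functions such as {{#if:…}} and {{#vardefine:…}}, a literal "|" is read as an argument separator, so table markup is built from the escape templates {{!}} (pipe) and {{!-}} (row separator) instead. A minimal sketch with a hypothetical parameter "foo":
<pre>
{{#if: {{{foo|}}}
| {{!-}}
{{!}} Foo:
{{!}} {{{foo|}}}
}}
</pre>
When "foo" is set, this expands to an ordinary table row; when it is empty, nothing is emitted. A bare "|-" line inside a {{#vardefine:…}} call would instead truncate the variable definition at that point.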
</noinclude>
4da937f44057b3befb2a993952203f3d5a3ae8dc
1643
1642
2017-05-02T21:25:56Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#regex: {{#var:taxonside}} | /[\n]+/ | \n }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6de054cbf5b3c94ab4c9f7c42a7d0f28b0565e8d
1644
1643
2017-05-02T21:27:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#regex: {{#var:taxonside}} | /[\n]+/ | "_" }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
cb5a93251669a425750ede502c683543e5e55a80
1645
1644
2017-05-02T21:28:17Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonside}}
{{#regex: {{#var:taxonside}} | /[\n]+/ | "_" }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
5c88c59382c78a48bdf32d2abf1e1b5634129b8c
1646
1645
2017-05-02T21:29:56Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonside}}
----
{{#if:{{#var:taxonside}}
| {{#regex: {{#var:taxonside}} | /[ \r\n]+/ | }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
7e7575603eefe70e9c7f6f2136a8c3d785fae74e
1647
1646
2017-05-02T21:32:31Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonside}}
----
{{#if:{{#var:taxonside}}
| {{#regex: {{#var:taxonside}} | /\n/ | _ }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
aff9282d5a3e7802ba95e283ebb882cedf0d61c5
1648
1647
2017-05-02T21:33:40Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonside}}
----
{{#if:{{#var:taxonside}}
| {{#regex: {{#var:taxonside}} | /^$/ | _ }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
dfcc6c44907507e8b1386afcd65e9ef09b47d013
1649
1648
2017-05-02T21:34:30Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonside|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonside}}
----
{{#if:{{#var:taxonside}}
| {{#regex: {{#var:taxonside}} | /^\w+$/ | _ }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
7b605c4aa40aaa01785a5c1521d9ebb86d416cf0
1650
1649
2017-05-02T21:35:55Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung|}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /^\w*$/ | _ }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
ea4e8b2cba63e19eba0d75143c0ef84b0723a9d0
1651
1650
2017-05-02T21:36:36Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /\n\n/ | _ }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
968b8df9b7f736809a811c9df13a0ba80768dbd1
1652
1651
2017-05-02T21:37:37Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /[\n]+/ | "\n" }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
4f239e8e7d0b09814e21cb5246b6d209df371c74
1653
1652
2017-05-02T21:40:02Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /\\n+/ | \\n }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
f908ae04bfa0ae5c629a84a09760fc257324f46f
1654
1653
2017-05-02T21:40:27Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle"
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /\n+/ | \\n }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
b24c50b18f1770ba4a0be86a4fbdbeef4dc3466e
1655
1654
2017-05-02T21:41:12Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /[\n]+/ | "\\n" }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6fe022d70582d7eb3d43c1f1d54506f455eb494b
1656
1655
2017-05-02T21:43:54Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /^\s+$^\s+$/ | "_" }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6a16f81427431216a80186b7c94b2ef2d4ca5447
1657
1656
2017-05-02T21:44:32Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /^\s*$/ | "_" }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
d9c2fa423ff4d8973a73468d909f9010d34cac01
1658
1657
2017-05-02T21:45:27Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /$$/ | "_" }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
2db5b9e4688c5a5b376f55e8f44d5838fb416398
1659
1658
2017-05-02T21:46:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /\n\n/ | "_" }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
236ef616c1df2756d9d88fb12f49c541a30105b4
1660
1659
2017-05-02T21:47:33Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /[\n]{2,}/ | _ }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
787c82e3f7a0339eaa32638395b3652743de5e75
1661
1660
2017-05-02T21:48:40Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#var:taxonbox}}
----
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
c304f75feb6ca13967e027dd074e7d6778f9e8dd
1662
1661
2017-05-02T21:49:16Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
ac7598c2a7110e2d5ac1af55f599fb6c287e6cc5
1665
1662
2017-05-03T08:10:29Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
332083301eb4a0d1f825292e4150734830fee6ad
1673
1665
2017-05-03T08:36:30Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
{{#string_loop:taxonType
| dominia
regnum
subregnum
superdivisio
divisio
subdivisio
superclassis
classis
subclassis
superordo
ordo
subordo
superfamilia
familia
subfamilia
genus
subgenus
species
subspecies
varietas
subvarietas
forma
subforma
| [[{{ #var: taxonType }}]] <nowiki/>
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
9156736a0ce18eab189644ce1ce6bf3e2e79fd02
1674
1673
2017-05-03T08:40:05Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
<pre>
{{#string_loop:taxonType
| dominia
regnum
subregnum
superdivisio
divisio
subdivisio
superclassis
classis
subclassis
superordo
ordo
subordo
superfamilia
familia
subfamilia
genus
subgenus
species
subspecies
varietas
subvarietas
forma
subforma
| [[{{ #var: taxonType }}]] <nowiki/>
}}
</pre>
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
a0f415335402a7254dd7458b577095dae61fca31
1675
1674
2017-05-03T08:41:51Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
{{#string_loop:taxonType
| dominia regnum subregnum superdivisio divisio subdivisio superclassis classis subclassis superordo ordo subordo superfamilia familia subfamilia genus subgenus species subspecies varietas subvarietas forma subforma
| [[{{ #var: taxonType }}]] <nowiki/>
}}
<pre>
</pre>
</includeonly>
<noinclude>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
878ec942c7ecc11ad08668e3339fe2568992dce2
1676
1675
2017-05-03T08:43:27Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subclassis|}}} | {{{subclassis|}}} | {{{classis|}}} }}{{!}}{{{superordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=7}}
{{#ifeq:{{PAGENAME}}|Blattodea|
[[Kategorie:Schaben]]
}}
[[Kategorie:{{#if: {{{superordo|}}} | {{{superordo|}}} | {{{subclassis|}}} }}{{!}}{{{ordo|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
[[Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=4}}
[[Kategorie:{{#if: {{{superfamilia|}}} | {{{superfamilia|}}} | {{{subordo|}}} }}{{!}}{{{familia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=3}}
[[Kategorie:{{{familia|}}}{{!}}{{{subfamilia|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
[[Kategorie:{{#if: {{{subfamilia|}}} | {{{subfamilia|}}} | {{{familia|}}} }}{{!}}{{{genus|}}}]]
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
[[Kategorie:{{{genus|}}}{{!}}{{{species|}}}]]
}}
</includeonly>
<noinclude>
<pre>
{{#string_loop:taxonType
| dominia regnum subregnum superdivisio divisio subdivisio superclassis classis subclassis superordo ordo subordo superfamilia familia subfamilia genus subgenus species subspecies varietas subvarietas forma subforma
| [[{{ #var: taxonType }}]] <nowiki/>
}}
</pre>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
4ff355f15a39591f97a70175c5ec6f30bd1826b6
1677
1676
2017-05-03T09:13:33Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{species}}}
| {{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
{{#string_loop:taxonType
| dominia regnum subregnum superdivisio divisio subdivisio superclassis classis subclassis superordo ordo subordo superfamilia familia subfamilia genus subgenus species subspecies varietas subvarietas forma subforma
| [[{{ #var: taxonType }}]] <nowiki/>
}}
</pre>
<pre>
Example call:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
bfe8f805f13b0d7c118313d5427fca660c0fffcd
Category:Eumetazoa
14
324
1663
2017-05-03T05:57:50Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | regnum = Animalia | subregnum = Eumetazoa | LSID…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| LSID =
| www.faunaeur.org_id =
}}
3970df27b679e12fe5cd0115c9d2e6a5784f0943
1671
1663
2017-05-03T08:16:25Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| LSID = urn:lsid:faunaeur.org:taxname:54070
| www.faunaeur.org_id = 54070
}}
1d71d7498b0e1e0e71e5322497d27b9841cb1eec
Category:Animalia
14
325
1664
2017-05-03T05:58:03Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | regnum = Animalia | LSID = | www.faunaeur.org_id =…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| LSID =
| www.faunaeur.org_id =
}}
4db5e4288ee08117c0ec518551d3ede575e14536
1672
1664
2017-05-03T08:16:58Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| LSID = urn:lsid:faunaeur.org:taxname:1
| www.faunaeur.org_id = 1
}}
6d490f5216324de62faee3b945e65f4be9855aea
Category:Hexapoda
14
326
1666
2017-05-03T08:11:16Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | regnum = Animalia | subregnum = Eumetazoa | phylum…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| LSID = urn:lsid:faunaeur.org:taxname:
| www.faunaeur.org_id =
}}
b1765c0d08970ad4e6d5dbba77697028eeb0facb
1669
1666
2017-05-03T08:15:13Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| LSID = urn:lsid:faunaeur.org:taxname:3
| www.faunaeur.org_id = 3
}}
2ba3e30d62c23d1db3985da9eab31dcf8193edce
Category:Arthropoda
14
327
1667
2017-05-03T08:11:47Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | Autor = | Bild = | Bildbeschreibung = | regnum = Animalia | subregnum = Eumetazoa | phylum…“
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| LSID = urn:lsid:faunaeur.org:taxname:
| www.faunaeur.org_id =
}}
4d2bdd7c9c4c5a543bfa8d4d17650c4e1d830be8
1670
1667
2017-05-03T08:15:55Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| LSID = urn:lsid:faunaeur.org:taxname:2
| www.faunaeur.org_id = 2
}}
007a2239d49313ced6a20a789f95f7520344f235
Category:Insecta
14
322
1668
1554
2017-05-03T08:14:42Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| LSID = urn:lsid:faunaeur.org:taxname:4
| www.faunaeur.org_id = 4
}}
cbc845d79579f55667e38f913dcc1ca29d022d22
Template:Systematik
10
117
1678
1677
2017-05-03T09:28:05Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{species}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
6e3df02193a18d538420abb1d8d3b73e169106fa
1679
1678
2017-05-03T09:35:10Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{species}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
2dc52d0e5cf8838462c596201af3600b362e4fd9
1680
1679
2017-05-03T09:36:55Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
3313d6428585b5be06db1af071f83b8f9f66db5f
1681
1680
2017-05-03T10:26:22Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Kategorie: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Kategorie: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Kategorie: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Kategorie: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Kategorie: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Kategorie: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
f3c33dd3c3014d8bb5e550dabb6f8fc76ecdb3d4
1682
1681
2017-05-03T11:37:57Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Kategorie: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Kategorie: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Kategorie: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Kategorie: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Kategorie: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Kategorie: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
17e410d552238b806d1d2eb009c2b399edb62904
1683
1682
2017-05-03T11:39:49Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Kategorie: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Kategorie: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Kategorie: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Kategorie: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Kategorie: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Kategorie: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
091804263b95ff2ce1767d121991fec10228dbae
1684
1683
2017-05-03T11:43:57Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Kategorie: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Kategorie: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Kategorie: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Kategorie: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Kategorie: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Kategorie: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
eea6c8b932a78e9c78d968471d36b03c635727d5
1686
1684
2017-05-03T11:50:14Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Kategorie: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Kategorie: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Kategorie: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Kategorie: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Kategorie: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Kategorie: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{tribus|}}}
| [[Kategorie: {{{tribus|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
281a66ae4c34de420442e839da6e801c57655bd0
1688
1686
2017-05-03T12:05:19Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Kategorie:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Kategorie: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Kategorie: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Kategorie: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Kategorie: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Kategorie: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Kategorie: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Kategorie: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Kategorie: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Kategorie: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Kategorie: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Kategorie: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Kategorie: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Kategorie: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Kategorie: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Kategorie: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Kategorie: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Kategorie: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Kategorie: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Kategorie: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Kategorie: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Kategorie: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Kategorie: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Kategorie: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Kategorie: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Kategorie: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Kategorie: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Kategorie: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Kategorie: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Kategorie: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Kategorie: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Kategorie: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Kategorie: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{tribus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{tribus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{tribus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{tribus|}}}
| [[Kategorie: {{{tribus|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{subfamilia|}}}
| [[Kategorie: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Kategorie: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Kategorie: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Kategorie: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#if: {{{subgenus|}}}
| [[Kategorie: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Kategorie: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
58fb5bfe803b4db3facf52b9448e7d5488388f01
Category:Gromphadorhini
14
323
1685
1574
2017-05-03T11:46:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| WissName = Gromphadorhini
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
}}
4beb9ae87955c60cf873aca6fee58670cd6325d7
1687
1685
2017-05-03T11:51:52Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| WissName = Gromphadorhini
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
}}
57d271442d6022525f49e10b3b35a13b279cd75d
Bifiditermes rogierae
0
313
1689
1521
2017-05-03T12:27:58Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogeriae
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogeriae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
e55b9a1ad4f0bc7a2dcabfc6eabac705ea1a377b
1690
1689
2017-05-03T12:28:18Z
Lollypop
2
Lollypop verschob die Seite [[Bifiditermes rogierae]] nach [[Bifiditermes rogeriae]]: Typo
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogeriae
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogeriae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
e55b9a1ad4f0bc7a2dcabfc6eabac705ea1a377b
1692
1690
2017-05-03T12:32:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogeriae
| Autor = Hollande 1982
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Bifiditermes
| subgenus =
| species = rogeriae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
a445f143bc31eb6a0391a358ad0fe7b0b8927e7b
1693
1692
2017-05-03T12:40:54Z
Lollypop
2
Änderung 1692 von [[Special:Contributions/Lollypop|Lollypop]] ([[User talk:Lollypop|Diskussion]]) rückgängig gemacht.
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogeriae
| Autor = Hollande 1982
| Kingdom = Animalia
| Subkingdom = Eumetazoa
| Phylum = Arthropoda
| Subphylum = Hexapoda
| Klasse = Insecta
| Ordnung = Dictyoptera
| Unterordnung = Isoptera
| Familie = Kalotermitidae
| Unterfamilie =
| Tribus =
| Gattung = Bifiditermes
| Untergattung =
| Art = rogeriae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
e55b9a1ad4f0bc7a2dcabfc6eabac705ea1a377b
1694
1693
2017-05-03T12:43:08Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogierae
| Autor = Hollande 1982
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Bifiditermes
| subgenus =
| species = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
07ad55b1ad85db3bf2de136613c1ec9beb2d47a6
1695
1694
2017-05-03T12:43:17Z
Lollypop
2
Lollypop verschob die Seite [[Bifiditermes rogeriae]] nach [[Bifiditermes rogierae]] und überschrieb dabei eine Weiterleitung
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Bifiditermes rogierae
| Autor = Hollande 1982
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Bifiditermes
| subgenus =
| species = rogierae
| Verbreitung = Teneriffa
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337260
}}
Aus: Teneriffa
* [https://www.expertoentermitas.org/descripcion-de-una-termita-nueva-de-las-islas-canarias-bifiditermes-rogierae-n-sp Descripción de Bifiditermes rogierae n. sp. termita de las Islas Canarias]
07ad55b1ad85db3bf2de136613c1ec9beb2d47a6
Bifiditermes rogeriae
0
329
1696
2017-05-03T12:43:17Z
Lollypop
2
Lollypop verschob die Seite [[Bifiditermes rogeriae]] nach [[Bifiditermes rogierae]] und überschrieb dabei eine Weiterleitung
wikitext
text/x-wiki
#WEITERLEITUNG [[Bifiditermes rogierae]]
d489164f82656bdb2b99f0073da45522a9b64770
Category:Isoptera
14
304
1697
1524
2017-05-03T14:39:25Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Termiten]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
[https://commons.wikimedia.org/wiki/Category:Isoptera Isoptera at Wikimedia Commons]
aaf7b70de2530a6f25e44109ca8e49f7f230b137
1698
1697
2017-05-03T14:39:54Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Termiten]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
* [https://commons.wikimedia.org/wiki/Category:Isoptera Isoptera at Wikimedia Commons]
fbf39cd2f9dde75dc7609c6b53d2da9389d62299
Cryptotermes brevis
0
310
1699
1513
2017-05-03T14:42:06Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Cryptotermes
| subgenus =
| species = brevis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337262
| www.faunaeur.org_id = 337262
}}
82fc233d728d468a792b0cc936754c7c8f5636fe
1726
1699
2017-05-04T22:21:42Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Cryptotermes brevis
| Autor = (Walker, 1853)
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Cryptotermes
| subgenus =
| species = brevis
| Bild = Cryptotermes_brevis.JPG
| Bildbeschreibung = Cryptotermes brevis, soldier, alate and worker
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337262
| www.faunaeur.org_id = 337262
}}
f9185251b4b550ff6b535b89c34da2c85dc688b1
Archispirostreptus gigas
0
13
1700
426
2017-05-03T14:47:17Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Riesentausendfüsser
| WissName = Archispirostreptus gigas
| Autor = Peters, 1855
| familia = Spirostreptidae
| subfamilia =
| genus = Archispirostreptus
| subgenus =
| species =
| Verbreitung = Somalia, Kenia, Tansania, Mosambik
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
5a3466afb2d58b6e4e95597bb803f053d6c6322e
1701
1700
2017-05-03T14:48:20Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Riesentausendfüsser
| WissName = Archispirostreptus gigas
| Autor = Peters, 1855
| familia = Spirostreptidae
| subfamilia =
| genus = Archispirostreptus
| subgenus =
| species = gigas
| Verbreitung = Somalia, Kenia, Tansania, Mosambik
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
b196596e02a03d4301fdb8278d633d090622f25c
Orthoporus ornatus
0
154
1702
431
2017-05-03T14:49:35Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Wüstentausendfüsser
| WissName = Orthoporus ornatus
| Autor = Girard, 1853
| familia = Spirostreptidae
| subfamilia =
| genus = Orthoporus
| subgenus =
| species = ornatus
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
b98c4436d234c257e38d4b9db8cbe1533a93e6ad
Telodeinopus aoutii
0
156
1703
428
2017-05-03T14:50:50Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Telodeinopus aoutii
| Autor = Demange, 1971
| familia = Spirostreptidae
| subfamilia =
| genus = Telodeinopus
| subgenus =
| species = aoutii
| Verbreitung = Togo, Ghana
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| Winterruhe =
}}
8940137bb7850594206b852f69a474088a77fba8
Dracunculus vulgaris
0
83
1704
168
2017-05-03T14:55:20Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| familia = Araceae
| subfamilia =
| genus = Dracunculus
| species = vulgaris
}}
== Beschreibung ==
227cafe136147e8891b012aa9b87b08f2a7030c2
Neotermes sp
0
320
1705
1522
2017-05-03T15:11:49Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Neotermes ]]
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Neotermes
| subgenus =
| species = sp
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
* [http://www.boldsystems.org/index.php/Taxbrowser_Taxonpage?taxid=354458 BoldSystems Database : Neotermes castaneus]
88f92158c5477cfd9a30404083452fd4303f5af6
MySQL Tipps und Tricks
0
197
1706
1372
2017-05-04T15:01:11Z
Lollypop
2
/* Filesystems for MySQL */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
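Note that data_length covers only the row data (the clustered index); to account for secondary indexes as well, include index_length — a sketch along the same lines as the queries above:
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round((data_length+index_length)/1024/1024,2) as total_size_mb from information_schema.tables where engine like 'innodb' order by total_size_mb;
</source>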
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
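To persist any of these across restarts, put them in a config file instead of (or in addition to) SET GLOBAL. A minimal fragment for slow-query logging might look like this (path and values are just an example; the variable names follow MySQL 5.1+, older versions use log_slow_queries):
<source lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time = 2
log_queries_not_using_indexes = 1
</source>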
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, journals metadata and groups it with the related data writes; the ext3/ext4 default)
* data=journal (worst performance but best data protection, journals metadata and all data)
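Put together, an /etc/fstab entry for a dedicated MySQL data filesystem could look like this (device path and mount point are assumptions, matching the mkfs example above):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>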
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes, really 25 GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
And do not forget AppArmor, like I did! :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
Now shut down MySQL again.
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::> export-policy rule create -policyname mysql-clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
</source>
===== On Linux =====
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== MySQL ======
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
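The commented rule of thumb innodb_buffer_pool_size=(0.7*total_mem_size) is where the 1433M above comes from, assuming roughly 2 GB of RAM. A small shell sketch of the calculation (the 2048 MB total is an assumption):
<source lang=bash>
# 70% of total memory, in MB (bash integer arithmetic)
total_mb=2048
bp_mb=$(( total_mb * 7 / 10 ))
echo "innodb_buffer_pool_size=${bp_mb}M"
</source>
This prints innodb_buffer_pool_size=1433M, matching the value used above.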
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
836a98407c956ce567d9c5984a04515b0eb00d6f
1707
1706
2017-05-04T15:04:02Z
Lollypop
2
/* NFSv4 */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, journals metadata and groups it with the related data writes; the ext3/ext4 default)
* data=journal (worst performance but best data protection, journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes, really 25 GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
And do not forget AppArmor, like I did! :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
Now shut down MySQL again.
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy rule create -policyname mysql-clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::>
</source>
===== On Linux =====
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== MySQL ======
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
1989c0cc84425771aecfc0b3b239c67a0cb20d20
1708
1707
2017-05-04T15:17:12Z
Lollypop
2
/* NFSv4 */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data writes; the ext3/ext4 default)
* data=journal (worst performance but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== MySQL ======
/etc/mysql/mysql.conf.d/innodb.cnf:
<source>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
|  +- Table
|  |  table          db
|  |  possible_keys  User
|  +- Index lookup
|     key            db->User
|     possible_keys  User
|     key_len        48
|     ref            mysql.user.User
|     rows           3
+- Table scan
   rows           68
   +- Table
      table          user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root -e "drop database sbtest;"
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# MySQL would otherwise treat lost+found as a database directory
#ignore-db-dirs = lost+found # once we run MySQL >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
2e8d3585c4d4500c4bd8d3ae48c6dbf63aec577e
1709
1708
2017-05-04T15:19:26Z
Lollypop
2
/* NFSv4 */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
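The xargs -n 1 -i part substitutes each account string for {}; the mechanics can be checked in isolation, without a server, using placeholder names:
<source lang=bash>
# Feed two placeholder account names through the same xargs pattern:
printf 'alice\nbob\n' | xargs -n 1 -i echo 'SHOW GRANTS FOR {}'
# prints:
#   SHOW GRANTS FOR alice
#   SHOW GRANTS FOR bob
</source>
Note that -i is the older GNU spelling of -I{}; both replace every occurrence of {} in the command with the current input line.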
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
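For example, to persist slow query logging across restarts, a fragment along these lines (the file path and thresholds are just examples) could live under /etc/mysql/conf.d/:
<source lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time = 2
</source>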
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data writes; the ext3/ext4 default)
* data=journal (worst performance but best data protection; journals metadata and all data)
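As a sketch, an /etc/fstab line combining these options (device and mount point are examples) might look like this; data=writeback trades crash safety for speed, so only pick it if you accept that risk:
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>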
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
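The same check works with plain shell integer arithmetic; 26843545600 bytes divided by 1024³ is exactly 25 GiB:
<source lang=bash>
# 26843545600 bytes / 1024^3 = GiB
echo $(( 26843545600 / 1024 / 1024 / 1024 ))  # prints 25
</source>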
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
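The kernel.shmmax value above is given in bytes and corresponds to 2 GiB (kernel.shmall, by contrast, is counted in pages, not bytes, so reusing the same number there is generous rather than exact):
<source lang=bash>
# shmmax is in bytes: 2147483648 B = 2 GiB
echo $(( 2147483648 / 1024 / 1024 / 1024 ))  # prints 2
</source>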
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== MySQL ======
/etc/mysql/mysql.conf.d/innodb.cnf:
<source>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
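The 612 MB/s figure is just bytes copied divided by elapsed time; recomputing it from the dd output (dd reports MB as 10^6 bytes):
<source lang=bash>
# bytes copied / seconds elapsed, scaled to MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 1.7552 / 1000000 }'  # prints 612 MB/s
</source>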
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
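The 1433M above is presumably the 0.7*total_mem_size rule from the comment applied to a host with 2 GiB (2048 MB) of RAM, since 0.7 of 2048 truncates to 1433:
<source lang=bash>
# 70% of 2048 MB, integer arithmetic
echo $(( 2048 * 7 / 10 ))M  # prints 1433M
</source>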
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
|  +- Table
|  |  table          db
|  |  possible_keys  User
|  +- Index lookup
|     key            db->User
|     possible_keys  User
|     key_len        48
|     ref            mysql.user.User
|     rows           3
+- Table scan
   rows           68
   +- Table
      table          user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root -e "drop database sbtest;"
</source>
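The nawk/awk snippet used for --mysql-password just prints whatever follows the first '=' on the password line of /root/.my.cnf. A standalone illustration with a throwaway option file (hypothetical path and password):
<source lang=bash>
# Build a throwaway option file and extract the password value
printf '[client]\npassword=s3cret\n' > /tmp/demo.my.cnf
awk -F'=' '/password/{print $2}' /tmp/demo.my.cnf  # prints s3cret
</source>
Note this simple field split breaks if the password itself contains '='.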
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
7d81807a0e40d878f77917a48499724125e82984
1710
1709
2017-05-04T15:29:45Z
Lollypop
2
/* On Linux */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
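The key trick in the one-liner above is quoting each account as <code>`user`@`host`</code> before feeding it to SHOW GRANTS. A minimal Python sketch of the same quoting step (the account values here are made-up examples):

```python
def quote_account(user: str, host: str) -> str:
    """Build the `user`@`host` form that SHOW GRANTS expects."""
    return "`{}`@`{}`".format(user, host)

# Example rows as they would come from: select user, host from mysql.user
rows = [("root", "localhost"), ("repl", "172.18.128.%")]
statements = ["show grants for {}".format(quote_account(u, h)) for u, h in rows]
for stmt in statements:
    print(stmt)
```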
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
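The size columns in these queries are plain bytes-to-MB arithmetic; a quick Python check of the rounding used above (the byte counts are made up):

```python
def size_mb(data_length_bytes: int) -> float:
    """Same as round(data_length/1024/1024, 2) in the information_schema query."""
    return round(data_length_bytes / 1024 / 1024, 2)

print(size_mb(26214400))    # a 25 MB table
print(size_mb(1073741824))  # a 1 GiB table
```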
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
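The NONE-wins behaviour can be modelled in a few lines of Python (a simplified sketch of how the server interprets log_output, not MySQL code):

```python
def effective_destinations(log_output: str) -> set:
    """NONE overrides every other destination listed in log_output."""
    dests = {d.strip().upper() for d in log_output.split(",")}
    return set() if "NONE" in dests else dests

print(effective_destinations("TABLE,FILE,NONE"))  # empty set: no logging at all
print(effective_destinations("TABLE,FILE"))       # both destinations active
```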
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, ext3/ext4 default, journals metadata and groups metadata with the related data writes)
* data=journal (worst performance but best data protection, journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
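The same arithmetic as the bc check, including why fdisk reports 26.8 GB (decimal units) for a volume that lvs shows as 25 GiB:

```python
size_bytes = 26843545600             # from fdisk -l
print(size_bytes / 1024**3)          # binary GiB, matching lvs
print(round(size_bytes / 10**9, 1))  # decimal GB, matching fdisk
```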
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
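systemctl edit writes the override into a drop-in under /etc/systemd/system/mysql.service.d/, and list-type settings such as After= from the drop-in are appended to those of the shipped unit, as the systemctl cat output shows. A simplified Python model of that merge (not systemd's actual parser):

```python
def merge_after(base_unit: dict, dropin: dict) -> list:
    """List-type settings like After= accumulate across the unit file and its drop-ins."""
    return base_unit.get("After", []) + dropin.get("After", [])

base = {"After": ["network.target"]}
override = {"After": ["network.target", "nfs-client.target"]}
print(merge_after(base, override))  # duplicates are harmless to systemd
```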
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
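For reference, dd's reported rate uses decimal megabytes; the throughput above can be recomputed from the byte count and the elapsed time:

```python
bytes_copied = 1073741824  # 65536 blocks of 16 KiB, from the dd output
elapsed_s = 1.7552         # copy time reported by dd
mb_per_s = bytes_copied / elapsed_s / 10**6
print(round(mb_per_s))     # matches dd's 612 MB/s
```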
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
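The 0.7 * total_mem_size rule of thumb in the comment explains where 1433M comes from: it is consistent with a host that has 2 GB of RAM (an assumption for illustration, the actual host size is not stated here):

```python
total_mem_mb = 2048                       # assumed host RAM in MB
buffer_pool_mb = int(total_mem_mb * 0.7)  # 70% rule of thumb from the comment
print(buffer_pool_mb)                     # 1433, as in the sample config
```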
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
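The --mysql-password argument above pulls the password out of /root/.my.cnf with nawk. The same extraction in Python (the file contents here are a made-up example):

```python
def password_from_mycnf(text: str) -> str:
    """Mimic nawk -F'=' '/password/{print $2}': the value after '=' on the password line."""
    for line in text.splitlines():
        if "password" in line and "=" in line:
            return line.split("=", 1)[1].strip()
    return ""

sample = "[client]\nuser=root\npassword=s3cret\n"
print(password_from_mycnf(sample))
```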
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
efbf0292d258cbcf6b91cd5aba3ff523286f9d83
1711
1710
2017-05-04T15:40:18Z
Lollypop
2
/* On Linux */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, ext3/ext4 default, journals metadata and groups metadata with the related data writes)
* data=journal (worst performance but best data protection, journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the mysql server starts add After=nfs-client.target to the systemd service in the Unit-section.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntus /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# trated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
87e8b4275dc785e6002b07d08856b1623ecd37f7
1712
1711
2017-05-04T15:41:19Z
Lollypop
2
/* /etc/mysql/mysql.conf.d/query_cache.cnf */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL only last until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled, no matter which other destinations are listed
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
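To make the runtime logging settings above survive a restart, mirror them in a my.cnf fragment; the file name and exact values below are just a suggestion:
<source lang=inifile>
[mysqld]
# persist the logging setup across restarts
log_output          = FILE
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
general_log         = 0
</source>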
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance but best data protection; journals metadata and all data)
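For illustration, an fstab entry combining these options could look like this (the device name matches the mkfs example above; the mount point is an assumption):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>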
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
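The bc arithmetic above can also be done with plain shell arithmetic; this is just a sanity check that 26843545600 bytes is exactly 25 GiB:

```shell
# fdisk reported 26843545600 bytes; confirm this is exactly 25 GiB
bytes=26843545600
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "${gib} GiB"   # prints "25 GiB"
```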
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=apparmor>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
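The values reported by SHOW VARIABLES are the configured sizes converted to bytes; a quick shell check confirms the conversions:

```shell
# 256K, 80M and 2k from query_cache.cnf, expressed in bytes
echo $(( 256 * 1024 ))         # 262144   -> query_cache_limit
echo $(( 80 * 1024 * 1024 ))   # 83886080 -> query_cache_size
echo $(( 2 * 1024 ))           # 2048     -> query_cache_min_res_unit
```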
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should always be there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
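As a cross-check of the dd numbers above: 65536 blocks of 16 KiB are exactly 1 GiB, matching the 1073741824 bytes reported:

```shell
# bs=16k, count=65536
echo $(( 65536 * 16 * 1024 ))   # 1073741824 bytes = 1 GiB
```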
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
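The innodb_buffer_pool_size=1433M above matches the 0.7*total_mem_size rule for a host with about 2 GiB of RAM; the 2 GiB figure is an assumption, since the page does not state the machine size:

```shell
total_mem_mb=2048                    # assumed total RAM in MiB
echo $(( total_mem_mb * 7 / 10 ))    # 1433 -> innodb_buffer_pool_size in MiB
```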
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
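A small demo of the ".cnf only" rule from the comment above; the directory and file names are made up for illustration:

```shell
# only files ending in .cnf are picked up by !includedir
tmp=$(mktemp -d)
touch "$tmp/innodb.cnf" "$tmp/notes.txt"
ls "$tmp"/*.cnf        # lists innodb.cnf only; notes.txt is ignored
rm -r "$tmp"
```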
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo=null;
'
</source>
daeab86172d944232b9ef8fd66c0c3ef5f58e834
File:Reticulitermes banyulensis soldier.jpg
6
330
1713
2017-05-04T21:25:37Z
Lollypop
2
Reticulitermes banyulensis, soldier
wikitext
text/x-wiki
Reticulitermes banyulensis, soldier
94652e18d9030bd3c8b5de733720217ee9e73537
Reticulitermes banyulensis
0
309
1714
1547
2017-05-04T21:29:52Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| subgenus =
| species = banyulensis
| Bild = Reticulitermes_banyulensis_soldier.jpg
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337269
| www.faunaeur.org_id = 337269
}}
40a4cff55ecde231adb498fabca71cabc096b180
1722
1714
2017-05-04T22:01:54Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| subgenus =
| species = banyulensis
| Bild = Reticulitermes_banyulensis_soldier.jpg
| Bildbeschreibung = Reticulitermes banyulensis soldier
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337269
| www.faunaeur.org_id = 337269
}}
[[Datei:Reticulitermes_banyulensis_larvae.JPG|640px|thumb|left|Reticulitermes banyulensis larvae]]
b9c20ead12ce5d6147134ee72d337e165c7c3688
1724
1722
2017-05-04T22:09:45Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| subgenus =
| species = banyulensis
| Bild = Reticulitermes_banyulensis_soldier.jpg
| Bildbeschreibung = Reticulitermes banyulensis soldier
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337269
| www.faunaeur.org_id = 337269
}}
[[Datei:Reticulitermes_banyulensis_larvae.JPG|640px|thumb|left|Reticulitermes banyulensis larvae]]
[[Datei:Reticulitermes_banyulensis_colony.JPG|640px|thumb|left|Reticulitermes banyulensis colony]]
5c4646af42cdc48d785e31d4dfc0e196c0ed568b
1728
1724
2017-05-04T22:39:23Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes banyulensis
| Autor = Clément, 1978
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| genus = Reticulitermes
| subgenus =
| species = banyulensis
| Bild = Reticulitermes_banyulensis_soldier.jpg
| Bildbeschreibung = Reticulitermes banyulensis soldier
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337269
| www.faunaeur.org_id = 337269
}}
<br>
[[Datei:Reticulitermes_banyulensis_larvae.JPG|640px|thumb|left|Reticulitermes banyulensis larvae]]
[[Datei:Reticulitermes_banyulensis_colony.JPG|640px|thumb|left|Reticulitermes banyulensis colony]]
[[Datei:Reticulitermes_banyulensis_secondary_king.JPG|640px|thumb|left|Reticulitermes banyulensis secondary king]]
6ab3992f920d053aae90e57cd3cdb8eee0d9de8e
Kalotermes flavicollis
0
331
1715
2017-05-04T21:36:39Z
Lollypop
2
The page was newly created: „{{Systematik | DeName = | WissName = Kalotermes flavicollis | Autor = Fabricius 1793 | regnum = Animalia | subregnu…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Kalotermes flavicollis
| Autor              = Fabricius, 1793
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis            = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Kalotermes
| subgenus =
| species = flavicollis
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337265
}}
9d4c8a065d4c411864de9aa07dea35bbaa1cfc89
1718
1715
2017-05-04T21:42:05Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Kalotermes flavicollis
| Autor              = Fabricius, 1793
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis            = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Kalotermes
| subgenus =
| species = flavicollis
| Bild = Kalotermes_flavicollis.jpg
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337260
| www.faunaeur.org_id = 337265
}}
6c0f1da70ac0784e11355ce9ecaabf44e606f4ad
Category:Kalotermes
14
332
1716
2017-05-04T21:37:55Z
Lollypop
2
The page was newly created: „{{Systematik | DeName = | WissName = | Autor = Hagen 1853 | regnum = Animalia | subregnum = Eumetazoa | ph…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName =
| Autor = Hagen 1853
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis            = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Kalotermes
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337263
| www.faunaeur.org_id = 337263
}}
913081ce20d0eddf48987fceeccc991573d3e07e
File:Kalotermes flavicollis.jpg
6
333
1717
2017-05-04T21:41:30Z
Lollypop
2
Kalotermes flavicollis, view inside a colony.
wikitext
text/x-wiki
Kalotermes flavicollis, view inside a colony.
4b78c53f497858e41bdb7ec8bf25849e216638e2
Reticulitermes grassei
0
303
1719
1520
2017-05-04T21:45:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| subfamilia = Heterotermitinae
| tribus =
| genus = Reticulitermes
| subgenus =
| species = grassei
| Bild =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337274
| www.faunaeur.org_id = 337274
}}
401e2f8068312c15a2089de26ff146678102fcfe
1720
1719
2017-05-04T21:48:11Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Reticulitermes grassei
| Autor = Holmgren, 1913
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Rhinotermitidae
| subfamilia =
| subfamilia? = Heterotermitinae
| tribus =
| genus = Reticulitermes
| subgenus =
| species = grassei
| Bild =
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
| LSID = urn:lsid:faunaeur.org:taxname:337274
| www.faunaeur.org_id = 337274
}}
adc3bd6f2202d6c25eaea8aa4e7bea34dde3053f
File:Reticulitermes banyulensis larvae.JPG
6
334
1721
2017-05-04T21:55:25Z
Lollypop
2
Reticulitermes banyulensis, larvae
wikitext
text/x-wiki
Reticulitermes banyulensis, larvae
c6ef3d1ed53643793f8d1865a459f238b5ce11d8
File:Reticulitermes banyulensis colony.JPG
6
335
1723
2017-05-04T22:08:14Z
Lollypop
2
Reticulitermes banyulensis colony
wikitext
text/x-wiki
Reticulitermes banyulensis colony
fb656ca593b469f6216ee80ec7e7cc8a77d3b7d0
File:Cryptotermes brevis.JPG
6
336
1725
2017-05-04T22:20:10Z
Lollypop
2
Cryptotermes brevis
wikitext
text/x-wiki
Cryptotermes brevis
4bbafb27648818430bf54bd7183e3d536288c39b
File:Reticulitermes banyulensis secondary king.JPG
6
337
1727
2017-05-04T22:37:41Z
Lollypop
2
Reticulitermes banyulensis, secondary king
wikitext
text/x-wiki
Reticulitermes banyulensis, secondary king
2e67763bbdc17fc015ba1c7c464717b309343f8b
MySQL Tipps und Tricks
0
197
1729
1712
2017-05-05T08:17:59Z
Lollypop
2
/* On NetApp CDOT SVM */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL only last until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
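As a sketch, slow query logging could be made permanent with a configuration fragment like the following (the filename /etc/mysql/conf.d/logging.cnf is just an example):
<source lang=mysql>
[mysqld]
# keep the slow query log enabled across restarts
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2
</source>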
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, all other destinations are ignored and nothing is logged
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; journals metadata and groups metadata with the related data writes)
* data=journal (worst performance but best data protection; journals metadata and all data)
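As a sketch, the options above could be combined in /etc/fstab like this (the mount point /var/lib/mysql is an assumption; use your actual data directory):
<source lang=bash>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>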
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
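Note that 612 MB/s most likely means the Linux page cache absorbed the write rather than the NFS storage itself. To get a more honest number, one could bypass the cache or force a flush at the end (same test file as above):
<source lang=bash>
# bypass the page cache entirely
time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536 oflag=direct
# or let the cache work, but fsync before dd reports the time
time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536 conv=fsync
</source>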
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
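Note: sysbench 1.0 dropped the --test=oltp interface in favor of bundled Lua scripts, so on newer installations an equivalent run would look roughly like this (option names per sysbench 1.0, adjust to your setup):
<source lang=bash>
# sysbench oltp_read_write --table-size=10000000 --mysql-db=sbtest --mysql-user=root --mysql-socket=/var/run/mysqld/mysqld.sock prepare
# sysbench oltp_read_write --threads=4 --time=900 --mysql-db=sbtest --mysql-user=root --mysql-socket=/var/run/mysqld/mysqld.sock run
# sysbench oltp_read_write --mysql-db=sbtest --mysql-user=root --mysql-socket=/var/run/mysqld/mysqld.sock cleanup
</source>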
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
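The PASSWORD() function is deprecated since MySQL 5.7.6 and removed in 8.0, so on newer servers the init file would instead contain something like:
<source lang=mysql>
ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';
</source>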
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo=null;
'
</source>
f99f59867bccb2ca3fd9a95733eece54e0cb3fb2
1730
1729
2017-05-05T08:19:16Z
Lollypop
2
/* On NetApp CDOT SVM */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL only last until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
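As a sketch, slow query logging could be made permanent with a configuration fragment like the following (the filename /etc/mysql/conf.d/logging.cnf is just an example):
<source lang=mysql>
[mysqld]
# keep the slow query log enabled across restarts
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2
</source>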
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, all other destinations are ignored and nothing is logged
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, journals metadata and groups it with the related data changes; the ext3 default mode)
* data=journal (worst performance but best data protection, journals metadata and all data)
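As a sketch, such options end up in /etc/fstab; the device is the one from the mkfs example above, while the mount point /var/lib/mysql is an assumption:
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>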
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload AppArmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
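The byte values reported above are just the suffixed settings from the .cnf file expanded; a small helper to check them (illustration only, not MySQL's parser):
<source lang=python>
# Expand MySQL-style size suffixes (K/M/G, powers of 1024) to bytes.
def to_bytes(value):
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1].upper()
    if suffix in units:
        return int(value[:-1]) * units[suffix]
    return int(value)

print(to_bytes("256K"))  # 262144   -> query_cache_limit
print(to_bytes("2k"))    # 2048     -> query_cache_min_res_unit
print(to_bytes("80M"))   # 83886080 -> query_cache_size
</source>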
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
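For reference, dd's numbers check out: 65536 blocks of 16 KiB are exactly 1 GiB, and the rate is reported in decimal megabytes per second:
<source lang=python>
# Verify dd's reported totals: block count * block size,
# and the transfer rate in decimal MB/s as dd prints it.
blocks, block_size = 65536, 16 * 1024
total_bytes = blocks * block_size
print(total_bytes)                              # 1073741824 (1 GiB)
print(round(total_bytes / 1.7552 / 1_000_000))  # 612 MB/s
</source>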
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
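The 1433M above follows the 0.7*total_mem_size rule of thumb from the comment, assuming a host with 2048 MB of RAM (the machine size is an assumption, not stated in the config):
<source lang=python>
# Rule of thumb from the comment above: buffer pool = 70% of total RAM.
def buffer_pool_mb(total_mem_mb, fraction=0.7):
    return int(total_mem_mb * fraction)

print(buffer_pool_mb(2048))  # 1433 -> innodb_buffer_pool_size=1433M
</source>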
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
7e59a787e1e1dd5b6baf3cfe6d9406161078217d
1731
1730
2017-05-05T08:20:29Z
Lollypop
2
/* On NetApp CDOT SVM */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mountoptions====
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the mysql server starts add After=nfs-client.target to the systemd service in the Unit-section.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
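Instead of dropping the table by hand, sysbench can also clean up after itself; a sketch with the same connection options as above:
<source lang=bash>
# sysbench \
--test=oltp \
--db-driver=mysql \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
cleanup
</source>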
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
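The SET PASSWORD ... = PASSWORD(...) syntax is gone in MySQL 5.7.6+/8.0; on those versions put an ALTER USER statement into the init file instead (a sketch, same procedure as above):
<source lang=bash>
# echo "ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';" > /root/mysql-init
</source>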
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
bd424e187bcc62e1bf258326808a30279f8c82c0
1744
1731
2017-05-12T15:12:15Z
Lollypop
2
/* Modify systemd service to wait for NFS */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
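data_length only covers the row data (the clustered index); to include secondary indexes as well, add index_length (a sketch based on the query above):
<source lang=mysql>
mysql> select table_schema as database_name, round(sum(data_length+index_length)/1024/1024,2) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>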
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
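For example, to make the slow query log permanent, something like this in a my.cnf include (paths are just an example):
<source lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
</source>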
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined in general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, nothing is logged at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
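To check which destinations are currently active:
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'log_output';
</source>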
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance; journals metadata and writes data blocks before the related metadata; the ext3/ext4 default mode)
* data=journal (worst performance but best data protection; journals metadata and all data)
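These options go into /etc/fstab; a sketch for the filesystem created above (the mount point /var/lib/mysql is an assumption):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>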
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
# The following are NetApp ONTAP options, not Linux sysctls - kept for reference only
#nfs.v3.enable on
#nfs.tcp.enable=on
#nfs.tcp.recvwindowsize=65536
#nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
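A quick check that the resulting unit ordering now contains nfs-client.target:
<source lang=bash>
# systemctl show mysql.service -p After
</source>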
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
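After mounting, the negotiated NFS version and mount options can be verified with (a sketch):
<source lang=bash>
# nfsstat -m
# findmnt -t nfs4
</source>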
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
5546386fa183868d6d603b1b4d07c830866f43df
1745
1744
2017-05-12T15:15:17Z
Lollypop
2
/* Modify systemd service to wait for NFS */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
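The percona-toolkit used further down also ships pt-show-grants, which produces the same list as replayable GRANT statements:
<source lang=bash>
# pt-show-grants
</source>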
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined in general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, nothing is logged at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance; journals metadata and writes data blocks before the related metadata; the ext3/ext4 default mode)
* data=journal (worst performance but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
d0d881c48ac5115fde4e2ebcdc73edfd3ab8f41a
1746
1745
2017-05-12T15:16:11Z
Lollypop
2
/* Modify systemd service to wait for NFS */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
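For InnoDB, data_length covers only the clustered index (the row data); secondary indexes are accounted for in index_length. A variant that counts both, using the same information_schema columns:

```sql
-- Per-database InnoDB size including secondary indexes
-- (data_length alone omits the index pages)
SELECT table_schema AS database_name,
       ROUND(SUM(data_length + index_length)/1024/1024, 2) AS total_size_mb
FROM information_schema.tables
WHERE engine = 'InnoDB'
GROUP BY table_schema
ORDER BY total_size_mb;
```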
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add it in your my.cnf to make it permanent!'''
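Persisting the slow query log could look like this, as a sketch (the file name is hypothetical; the variable names exist in MySQL 5.1 and later):

```ini
# /etc/mysql/conf.d/logging.cnf (hypothetical file name)
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time     = 2
log_output          = FILE
```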
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
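To check what is currently active before flipping anything:

```sql
-- Current logging-related settings
SHOW GLOBAL VARIABLES LIKE 'general_log%';
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'log_output';
```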
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output destinations, logging is disabled regardless of the other values
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
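With log_output=TABLE the entries can be queried like any other table; the column names below are from the stock mysql.general_log and mysql.slow_log schemas:

```sql
-- Most recent statements from the general log
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC LIMIT 10;

-- Slowest queries first
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY query_time DESC LIMIT 10;
```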
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups the related data writes with it)
* data=journal (worst performance, but best data protection; journals metadata and all data)
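A matching /etc/fstab entry might look like this (mount point and the data=writeback choice are assumptions; writeback trades some crash safety for speed, which can be acceptable since InnoDB journals its own data):

```ini
# hypothetical /etc/fstab entry for a dedicated MySQL data filesystem
/dev/mapper/vg--data-lv--ext4--mysql_data  /var/lib/mysql  ext4  noatime,data=writeback  0  2
```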
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
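The same check without fdisk and bc, as a quick sketch; blockdev is part of util-linux, and the size in innodb_data_file_path is interpreted in binary units:

```shell
# Byte size of the block device, converted to the G figure
# used in innodb_data_file_path (binary GiB).
bytes=26843545600   # from: blockdev --getsize64 /dev/vg-data/lv-rawdisk-innodb01
echo "$(( bytes / 1024 / 1024 / 1024 ))G"   # prints 25G
```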
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Modify systemd service to raise the open files limit ======
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
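The byte values above are simply the K/M suffixes from query_cache.cnf expanded:

```shell
# 256K and 80M from query_cache.cnf expressed in bytes,
# matching the SHOW VARIABLES output above.
echo $(( 256 * 1024 ))        # query_cache_limit -> 262144
echo $(( 80 * 1024 * 1024 ))  # query_cache_size  -> 83886080
```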
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
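Note that dd writing from /dev/zero without a sync flag largely measures the page cache, not the NFS backend. A slightly more honest variant (conv=fdatasync is supported by GNU dd) flushes once before dd reports its rate:

```shell
# fdatasync at the end, so the reported MB/s includes the flush
# over NFS instead of just the write into RAM.
time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536 conv=fdatasync
```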
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
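The 0.7*total_mem_size rule of thumb from the comment above can be computed at deploy time. A minimal sketch (the pool_size_mb helper is hypothetical, not part of MySQL):
<source lang=bash>
# Hypothetical helper: takes total memory in KiB, prints 70% of it in MiB.
pool_size_mb() {
    echo $(( $1 * 70 / 100 / 1024 ))
}
# On a 2 GiB host (2097152 KiB) this prints 1433, matching the 1433M above.
pool_size_mb 2097152
</source>
On a live host you would pass $(awk '/^MemTotal:/{print $2}' /proc/meminfo) instead of a literal value.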
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
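SET PASSWORD ... = PASSWORD('...') only works on older servers; from MySQL 5.7.6 on, the PASSWORD() function is deprecated and the init file would use ALTER USER instead (a sketch with the same placeholder password):
<source lang=bash>
# echo "ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';" > /root/mysql-init
</source>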
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
51cf3b353f2a5cf6b3e6584c4f2738e3f5d73631
1747
1746
2017-05-12T15:19:22Z
Lollypop
2
/* On Linux */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add it in your my.cnf to make it permanent!'''
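For example, to keep slow-query logging enabled across restarts, the equivalent lines go into a .cnf file; the file name logging.cnf is just a suggestion:
<source lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time = 2
</source>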
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files given by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; journals metadata and groups data writes with their related metadata)
* data=journal (worst performance but best data protection; journals metadata and all data)
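A matching /etc/fstab entry combining these (the mount point /var/lib/mysql is an assumption; the device name is taken from the mkfs example above):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>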
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd service to raise the number of files limit ======
To raise the open-files limit for the service, you have to tell systemd about the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
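Note that the 612 MB/s above is flattered by the client's page cache; dd reports success as soon as the data is buffered. Forcing a flush before dd exits gives a more honest number (conv=fdatasync is a standard GNU dd conversion flag):
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536 conv=fdatasync
</source>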
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
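The 0.7*total_mem_size rule of thumb from the comment above can be computed at deploy time. A minimal sketch (the pool_size_mb helper is hypothetical, not part of MySQL):
<source lang=bash>
# Hypothetical helper: takes total memory in KiB, prints 70% of it in MiB.
pool_size_mb() {
    echo $(( $1 * 70 / 100 / 1024 ))
}
# On a 2 GiB host (2097152 KiB) this prints 1433, matching the 1433M above.
pool_size_mb 2097152
</source>
On a live host you would pass $(awk '/^MemTotal:/{print $2}' /proc/meminfo) instead of a literal value.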
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
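SET PASSWORD ... = PASSWORD('...') only works on older servers; from MySQL 5.7.6 on, the PASSWORD() function is deprecated and the init file would use ALTER USER instead (a sketch with the same placeholder password):
<source lang=bash>
# echo "ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';" > /root/mysql-init
</source>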
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
7bad5a9d2bda4f4a3a8dfcbae3adbc7e1cbc14e3
1748
1747
2017-05-12T15:19:46Z
Lollypop
2
/* Raise allowed number of open files /etc/security/limits.d/mysql.conf */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add it in your my.cnf to make it permanent!'''
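For example, to keep slow-query logging enabled across restarts, the equivalent lines go into a .cnf file; the file name logging.cnf is just a suggestion:
<source lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time = 2
</source>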
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files given by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, ext3/ext4 default mode, journals metadata and groups metadata with the related data changes)
* data=journal (worst performance, but best data protection, journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''Afterwards''' the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes, really 25 GB!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaand... do not forget AppArmor! Like I did... :-D
<source>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
Now shut down MySQL again, and in /etc/mysql/conf.d/innodb.cnf '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd about the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
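SHOW STATUS output is plain text and easy to post-process. As a self-contained sketch, one common approximation of the query cache hit rate (the counter values below are invented; on a live server they would come from SHOW GLOBAL STATUS):
<source lang=bash>
# Sample counters in SHOW STATUS layout (made-up numbers):
cat > /tmp/status-sample.txt <<'EOF'
Qcache_hits 9000
Com_select 1000
EOF
# Com_select only counts SELECTs that missed the query cache, so:
#   hit rate = Qcache_hits / (Qcache_hits + Com_select)
awk '/^Qcache_hits/ {h=$2} /^Com_select/ {s=$2} END { printf "hit rate: %.1f%%\n", 100*h/(h+s) }' /tmp/status-sample.txt
# prints: hit rate: 90.0%
</source>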
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
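Note: SET PASSWORD ... = PASSWORD('...') was deprecated in MySQL 5.7.6 and removed in 8.0. On newer servers the same recovery procedure works, only the statement in the init file changes:
<source lang=bash>
# echo "ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';" > /root/mysql-init
</source>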
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
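The read order matters, because the last occurrence of an option wins. MySQL does not guarantee the order in which an !includedir directory is read (on most systems it is alphabetical), so numbered file-name prefixes are a safe convention. A stand-alone illustration with dummy files:
<source lang=bash>
mkdir -p /tmp/conf.d-demo
printf '[mysqld]\nkey_buffer = 128M\n' > /tmp/conf.d-demo/10-base.cnf
printf '[mysqld]\nkey_buffer = 256M\n' > /tmp/conf.d-demo/20-override.cnf
# Simulate reading the files in sorted order; the last value read wins:
cat /tmp/conf.d-demo/*.cnf | awk -F' = ' '/^key_buffer/ {v=$2} END {print v}'
# prints: 256M
</source>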
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
453950c022c0244b5db492feae3202882c096b93
1749
1748
2017-05-12T15:24:38Z
Lollypop
2
/* Modify systemd service to raise the number of files limit */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the next server restart.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries (replaced by slow_query_log as of MySQL 5.6)
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
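As noted above, SET GLOBAL does not survive a restart. A my.cnf fragment along these lines (the file paths are examples) makes the logging setup permanent:
<source lang=inifile>
[mysqld]
log_output          = FILE
general_log         = 0
general_log_file    = /var/lib/mysql/general.log
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time     = 2
</source>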
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
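The file enabled above is plain text and can be mined with standard tools. A minimal, self-contained sketch (the sample entries below are made up, in the slow-query-log format):
<source lang=bash>
# A tiny sample in the slow-query-log format (made-up numbers):
cat > /tmp/slow-sample.log <<'EOF'
# Time: 150812 19:14:24
# Query_time: 4.512000  Lock_time: 0.000120 Rows_sent: 10  Rows_examined: 100000
SELECT * FROM t1;
# Query_time: 0.900000  Lock_time: 0.000080 Rows_sent: 1  Rows_examined: 10
SELECT 1;
EOF
# Count queries slower than 2 seconds:
awk '/^# Query_time:/ { if ($3+0 > 2) slow++ } END { print slow+0 }' /tmp/slow-sample.log
# prints: 1
</source>
For real logs, mysqldumpslow (bundled with MySQL) or pt-query-digest do this kind of aggregation properly.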
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is journaled)
* data=ordered (good performance, ext3/ext4 default mode, journals metadata and groups metadata with the related data changes)
* data=journal (worst performance, but best data protection, journals metadata and all data)
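Putting these options together, a matching fstab entry could look like this (device name taken from the mkfs example above; the mountpoint is an assumption):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>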
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''Afterwards''' the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes, really 25 GB!
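The bc arithmetic above can also be done with plain shell arithmetic; a tiny helper (the function name is my own):
<source lang=bash>
# Integer GiB from a byte count, using only bash arithmetic:
bytes_to_gb() { echo $(( $1 / 1024 / 1024 / 1024 )); }
bytes_to_gb 26843545600   # prints 25
</source>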
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaand... do not forget AppArmor! Like I did... :-D
<source>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
Now shut down MySQL again, and in /etc/mysql/conf.d/innodb.cnf '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
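After creating the file, load it with sysctl -p /etc/sysctl.d/99-mysql.conf (root required). The effective values can be read back from /proc at any time, for example:
<source lang=bash>
# The swappiness value the kernel is currently using:
cat /proc/sys/vm/swappiness
</source>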
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd about the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the override and check the new limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -I{} mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
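For example, to make slow-query logging permanent, the corresponding lines go into a config fragment that is read at startup (file name and values here are only an illustration):
<source lang=inifile>
# /etc/mysql/conf.d/logging.cnf (illustrative path)
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time     = 2
</source>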
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
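A quick way to see the current state of these switches (variable names as listed above; note that log_slow_queries was renamed to slow_query_log in MySQL 5.6):
<source lang=mysql>
mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN ('general_log','log_queries_not_using_indexes','log_slave_updates','slow_query_log');
</source>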
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files named by slow_query_log_file and general_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; metadata is journaled and data blocks are written out before their metadata is committed; this is the ext3/ext4 default)
* data=journal (worst performance, best data protection; all data and all metadata are journaled)
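A matching /etc/fstab line for the filesystem created above might look like this (the mount point /var/lib/mysql is an assumption):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>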
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again.
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the new limit and check it:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
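The byte values in the SHOW output are just the shorthand sizes from the config multiplied out; a quick sanity check:
<source lang=bash>
# query_cache_limit = 256K and query_cache_size = 80M, in bytes
echo $((256 * 1024))
echo $((80 * 1024 * 1024))
</source>
This prints 262144 and 83886080, matching the table above.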
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
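The throughput dd reports is just bytes over elapsed wall time, in decimal megabytes; recomputing it from the numbers above:
<source lang=bash>
# 1073741824 bytes copied in 1.7552 s, expressed in MB/s
awk 'BEGIN { printf "%.0f\n", 1073741824 / 1.7552 / 1000000 }'
</source>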
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
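The innodb_buffer_pool_size comment above is the usual 70%-of-RAM rule of thumb; the 1433M value corresponds to a host with 2048 MB of memory, which is an assumption here, not something stated in the config:
<source lang=bash>
# 0.7 * 2048 MB total memory, using integer arithmetic
echo $((2048 * 7 / 10))
</source>
This prints 1433.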
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
bbb678fe1e2475f6fe95afe5cca33bc66c32587a
1751
1750
2017-05-12T15:31:17Z
Lollypop
2
/* Modify systemd service to wait for NFS */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
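The query above counts data pages only; to include index size as well, the same query can be extended with index_length (an illustrative variant, not from the original notes):
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round((data_length+index_length)/1024/1024,2) as total_size_mb from information_schema.tables where engine like 'innodb' order by total_size_mb;
</source>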
==Logging==
Settings changed with SET GLOBAL only last until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set in slow_query_log_file and general_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
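To check which destinations are currently active, query the global variable (a quick sanity check):
<source lang=mysql>
mysql> SELECT @@global.log_output;
</source>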
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
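If log_output includes TABLE, slow queries land in the mysql.slow_log table and can be inspected with plain SQL; a minimal sketch (column names as in MySQL 5.x):
<source lang=mysql>
mysql> SELECT start_time, query_time, sql_text FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;
</source>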
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is logged)
* data=ordered (OK performance, records metadata and groups metadata related to the data changes)
* data=journal (worst performance but best data protection, ext3 default mode, records metadata and all data)
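Putting the options together, a matching /etc/fstab entry for the filesystem created above could look like this (the mount point is just an example):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>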
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open file limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
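Whether the cache actually pays off can be judged from the Qcache status counters (compare Qcache_hits with Qcache_inserts):
<source lang=mysql>
mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';
</source>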
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
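The 0.7*total_mem_size rule of thumb from the comment can be computed directly; a minimal sketch, assuming a Linux box with /proc/meminfo:
<source lang=bash>
# awk '/^MemTotal/ {printf "innodb_buffer_pool_size=%dM\n", $2*0.7/1024}' /proc/meminfo
</source>
MemTotal is reported in kB, so dividing by 1024 yields MB; round to taste.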
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
019f53e58e601f5094f6a805e10874ca478c911e
1752
1751
2017-05-23T07:31:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
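The query above counts data pages only; to include index size as well, the same query can be extended with index_length (an illustrative variant, not from the original notes):
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round((data_length+index_length)/1024/1024,2) as total_size_mb from information_schema.tables where engine like 'innodb' order by total_size_mb;
</source>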
==Logging==
Settings changed with SET GLOBAL only last until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set in slow_query_log_file and general_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
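To check which destinations are currently active, query the global variable (a quick sanity check):
<source lang=mysql>
mysql> SELECT @@global.log_output;
</source>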
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
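If log_output includes TABLE, slow queries land in the mysql.slow_log table and can be inspected with plain SQL; a minimal sketch (column names as in MySQL 5.x):
<source lang=mysql>
mysql> SELECT start_time, query_time, sql_text FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;
</source>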
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from the master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
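On the slave itself, the replication state (IO/SQL thread status, lag) can be checked with the classic MySQL 5.x command:
<source lang=mysql>
mysql> SHOW SLAVE STATUS\G
</source>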
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is logged)
* data=ordered (OK performance, records metadata and groups metadata related to the data changes)
* data=journal (worst performance but best data protection, ext3 default mode, records metadata and all data)
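Putting the options together, a matching /etc/fstab entry for the filesystem created above could look like this (the mount point is just an example):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>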
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the number of open files for the service you have to tell systemd about the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the change and verify the new limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
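Whether the cache is effective can be checked at runtime via the Qcache status counters (standard MySQL status variables; run against the configured server):
<source lang=mysql>
mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';
</source>
A steadily growing Qcache_lowmem_prunes suggests query_cache_size is too small for the workload.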
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
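Note that a plain dd writes through the client page cache, so the number above mostly reflects memory bandwidth; a variant with conv=fdatasync forces the data out to the NFS server before dd reports (a sketch, reusing the io.test path from above):

<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536 conv=fdatasync
</source>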
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
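The 0.7*total_mem_size rule of thumb from the comment can be computed from /proc/meminfo, for example (a sketch; round as you see fit):

<source lang=bash>
# awk '/^MemTotal:/ {printf "innodb_buffer_pool_size=%dM\n", $2*0.7/1024}' /proc/meminfo
</source>

On a host with 2 GiB of RAM this yields roughly the 1433M used above.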
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, but it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
9e0a71917dd06ba3b898648f474090980f747a89
1753
1752
2017-05-23T07:40:19Z
Lollypop
2
/* What did we see from the master */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL only last until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
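For example, to keep slow-query logging enabled across restarts, mirror the runtime change in a small config file (a sketch following the structured configuration layout shown on this page; the file name is arbitrary):
<source lang=inifile>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
</source>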
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere among the log_output destinations, there is no logging at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to investigate on the master, run this on the slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; metadata is journaled and data blocks are written out before their metadata is committed; the ext3/ext4 default mode)
* data=journal (worst performance, but best data protection; all data and metadata are journaled)
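A corresponding /etc/fstab entry for the filesystem created above might look like this (a sketch; the device is the one from this example, the mount point is an assumption, and the data= mode should match your protection needs):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=ordered 0 2
</source>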
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=apparmor>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Then change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the number of files for the service you have to tell the systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the limit
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the mysql server starts add After=nfs-client.target to the systemd service in the Unit-section.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
7585b5afffe0b70010f648d46a620ed7977ab928
1758
1753
2017-06-15T11:59:45Z
Lollypop
2
/* All grants */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
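The xargs part can be tried without a server: each input line becomes one command invocation (here echo stands in for mysql, and the two accounts are made up):

```bash
# One account string per line; -n 1 runs the command once per line
# and -i substitutes the line for {} (as in the one-liner above).
printf '`root`@`localhost`\n`app`@`%%`\n' | xargs -n 1 -i echo "show grants for {}"
```

Every output line is a complete statement; the real one-liner hands each one to a separate mysql call.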
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
...: Optional parameters to the mysql command
EOH
exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--help)
usage
;;
*)
;;
esac
done
# if no users specified, show all grants
if [ ${#grant_user[@]} -eq 0 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
else
# Fill users which are without host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
user="${grant_user[${param}]}"
if [[ ${user} != ?*"@"?* ]]
then
before=${#grant_user[@]}
if [[ ${user} == "@"?* ]]
then
host="${user/@}"
if [[ "_${host}_" == "_%_" ]]
then
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
fi
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
fi
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
fi
done
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
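The awk expression used for --grant-user above, which rewrites user@host into the quoted 'user'@'host' form, can be tested on its own (the account is made up):

```bash
# Split on '@' and re-quote; bare user names and @host patterns pass through.
echo 'alice@localhost' | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}"
# prints: 'alice'@'localhost'
```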
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
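For example, to keep the slow query log enabled across restarts, put the settings into a conf.d fragment (the Ubuntu conf.d layout is described under "Structured configuration"; the file name /etc/mysql/conf.d/slow_query.cnf is just an example):

```ini
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
```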
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files slow_query_log_file and general_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to investigate on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; metadata is journaled and data blocks are written out before the related metadata is committed)
* data=journal (worst performance, but best data protection; both metadata and all data are journaled)
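A matching /etc/fstab entry could look like this (a sketch; the device path reuses the mkfs example above, the mount point and data= mode depend on your setup):

```fstab
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
```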
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
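The same check works with plain shell arithmetic instead of bc:

```bash
# 26843545600 bytes divided by 1024^3 gives the size in GiB.
bytes=26843545600
echo "$(( bytes / (1024 * 1024 * 1024) )) GiB"   # prints: 25 GiB
```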
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Then change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the change and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
7336dc4bd249c0e27b191698489ba293f89c7e25
1759
1758
2017-06-15T15:38:47Z
Lollypop
2
/* All grants */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
...: Optional parameters to the mysql command
EOH
exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--help)
usage
;;
*)
;;
esac
done
# Fill users which are without host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
user="${grant_user[${param}]}"
if [[ ${user} != ?*"@"?* ]]
then
before=${#grant_user[@]}
if [[ ${user} == "@"?* ]]
then
host="${user/@}"
if [[ "_${host}_" == "_%_" ]]
then
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
fi
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
fi
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
fi
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# if no users specified, show all grants
if [ ${#grant_user[@]} -eq 0 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
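The awk expression in the option parser above rewrites a user@host pattern into the quoted 'user'@'host' form that MySQL expects. A minimal standalone sketch of just that step (the helper name normalize is made up for illustration):
<source lang=bash>
# Normalize a user@host pattern into MySQL's quoted 'user'@'host' form.
# Patterns without both parts (plain user, or @host) pass through unchanged
# and are expanded against mysql.user later, as in the script above.
normalize() {
  echo "$1" | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}"
}
normalize 'alice@%'     # -> 'alice'@'%'
normalize 'bob'         # -> bob
normalize '@localhost'  # -> @localhost
</source>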
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL only last until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files named by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None; if NONE appears anywhere in the log_output destinations, there is no logging at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
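SET GLOBAL changes like the ones above are lost on restart; the persistent equivalent is a config snippet. A sketch (the exact file paths are examples, following the conf.d layout used further down):
<source lang=mysql>
[mysqld]
# persistent version of the SET GLOBAL logging examples above
log_output          = FILE
general_log         = OFF
general_log_file    = /var/lib/mysql/general.log
slow_query_log      = ON
slow_query_log_file = /var/lib/mysql/slow-query.log
</source>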
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from the master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to investigate on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
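The awk one-liner above selects a single field from the \G-style output by matching on the first column. The same pattern on canned input (the sample values here are invented):
<source lang=bash>
# Pick one line out of `show slave status\G`-style output by its first field.
slave_status() {
  printf '%s\n' \
    '             Slave_IO_State: Waiting for master to send event' \
    '            Master_Log_File: mysql-bin.000042' \
    '        Read_Master_Log_Pos: 10734'
}
slave_status | awk '$1=="Master_Log_File:"'
# ->             Master_Log_File: mysql-bin.000042
</source>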
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data changes; the ext3/ext4 default mode)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
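The same cross-check works with plain shell integer arithmetic: lvs reports binary gibibytes (25.00g) while fdisk reports decimal gigabytes (26.8 GB), both derived from the same byte count:
<source lang=bash>
# 26843545600 bytes in binary GiB (what lvs shows as 25.00g) ...
echo $(( 26843545600 / 1024 / 1024 / 1024 ))  # -> 25
# ... and in decimal GB (what fdisk rounds to 26.8 GB)
echo $(( 26843545600 / 1000 / 1000 / 1000 ))  # -> 26
</source>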
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the change and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check that they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
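The values reported by SHOW VARIABLES are simply the byte equivalents of the K/M-suffixed settings; a quick sanity check of the three numbers above:
<source lang=bash>
echo $(( 256 * 1024 ))        # query_cache_limit 256K       -> 262144
echo $(( 2 * 1024 ))          # query_cache_min_res_unit 2k  -> 2048
echo $(( 80 * 1024 * 1024 ))  # query_cache_size 80M         -> 83886080
</source>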
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
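dd's MB/s figure is just bytes divided by elapsed seconds, in decimal megabytes. Recomputed from the run above:
<source lang=bash>
# 1 GiB written in 1.7552 s, expressed as decimal MB/s like dd reports it
awk 'BEGIN { printf "%.0f\n", 1073741824 / 1.7552 / 1000000 }'  # -> 612
</source>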
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
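The 1433M above follows the 0.7*total_mem_size rule of thumb for a host with 2048 MB of RAM (the memory size is an assumption here; plug in your own):
<source lang=bash>
# innodb_buffer_pool_size = 0.7 * total memory, for an assumed 2 GB host
total_mem_mb=2048
echo "$(( total_mem_mb * 7 / 10 ))M"  # -> 1433M
</source>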
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(awk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
7c292a1d70bbd291c85817387e64ede3bc1883c9
1760
1759
2017-06-15T16:06:37Z
Lollypop
2
/* All grants */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
...: Optional parameters to the mysql command
EOH
exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--help)
usage
;;
*)
;;
esac
done
# Fill users which are without host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
user="${grant_user[${param}]}"
if [[ ${user} != ?*"@"?* ]]
then
before=${#grant_user[@]}
if [[ ${user} == "@"?* ]]
then
host="${user/@}"
if [[ "_${host}_" == "_%_" ]]
then
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
fi
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
fi
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
fi
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# if no users specified, show all grants
if [ ${#grant_user[@]} -eq 0 -a ${#grant_db[@]} -eq 0 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL only last until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files named by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None; if NONE appears anywhere in the log_output destinations, there is no logging at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from the master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to investigate on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and groups metadata with the related data writes; the ext3/ext4 default)
* data=journal (worst performance but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
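The same check also works without bc, using plain shell integer arithmetic (exact here because the volume size is a whole number of GiB):

```shell
# Size in bytes as reported by fdisk above
bytes=26843545600

# 1 GiB = 1024 * 1024 * 1024 bytes
gib=$(( bytes / (1024 * 1024 * 1024) ))
echo "${gib} GiB"
```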
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget AppArmor! Like I did... :-D
<source>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Then edit your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
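The awk expression NR==1 || /Max open files/ keeps the header row (record number 1) plus every line matching the pattern. It can be tried on any captured limits snapshot:

```shell
# A trimmed sample in the format of /proc/PID/limits (not from a live process)
limits='Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max open files            1024000              1024000              files
Max locked memory         65536                65536                bytes'

# Keep the header line (NR==1) and the open-files row
echo "$limits" | awk 'NR==1 || /Max open files/'
```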
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
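SHOW VARIABLES reports the configured sizes expanded to bytes; the K and M suffixes in my.cnf are binary units, so the values above check out:

```shell
# my.cnf suffixes are binary: K = 1024 bytes, M = 1024*1024 bytes
echo "query_cache_limit:        $(( 256 * 1024 ))"        # 256K
echo "query_cache_min_res_unit: $(( 2 * 1024 ))"          # 2k
echo "query_cache_size:         $(( 80 * 1024 * 1024 ))"  # 80M
```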
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
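Beware: without a sync, dd largely measures the page cache, which is why 612 MB/s looks so good. A slightly more honest (still crude) variant makes dd flush the data before reporting its stats; shown here against a temporary file so it can be tried anywhere, the NFS path above being the real target:

```shell
# conv=fdatasync: dd calls fdatasync() on the file before printing its stats,
# so the reported rate includes flushing the data to stable storage.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=16k count=4096 conv=fdatasync
rm -f "$testfile"
```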
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
Note: PROCEDURE ANALYSE() is deprecated in MySQL 5.7 and removed in 8.0.
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
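SET PASSWORD ... = PASSWORD('...') is deprecated in MySQL 5.7 and removed in 8.0; the same --init-file trick works there with ALTER USER instead. A sketch of generating the init file (written to a temporary path and with a placeholder password; the real file would go to /root/mysql-init as above):

```shell
# Generate the init file with the MySQL 5.7+ syntax
# ('new-password' is a placeholder, not a suggestion)
initfile=$(mktemp)
cat > "$initfile" <<'EOF'
ALTER USER 'root'@'localhost' IDENTIFIED BY 'new-password';
EOF
cat "$initfile"
rm -f "$initfile"
```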
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
51ffc7f7bd641f062b601d70001658c35614b120
SSH Tipps und Tricks
0
75
1732
1389
2017-05-05T12:56:28Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, the way to the destination=
==SSH over one or more hops==
To establish the SSH connection from Host_A to Host_B you have to tunnel there through two machines (GW_1 and GW_2). If you always log in to one host and then log in onward from there, it is sometimes very difficult to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the path from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
We can only reach GW_2 via GW_1, though, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now done simply like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then established in the background and the port forwarding is done directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by SSH with the host (%h) and the port (%p).
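On newer OpenSSH (7.3 and later) the same multi-hop chaining can be written more compactly with ProxyJump; a sketch assuming the same host names as above:

```
Host Host_B
    ProxyJump GW_1,GW_2
```

A one-off equivalent on the command line is <i>ssh -J GW_1,GW_2 Host_B</i>.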
==Breaking out of paradise==
Problem: the environment you are working in is unfortunately so boxed in with firewalls that you cannot get anything done. But you need to get out via SSH to quickly look something up or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then add to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course, on the ssh-server you can again add a ProxyCommand, and so on.
==Oh right... the internal wiki...==
Not a problem either if it is only reachable from the internal network; then we simply connect via a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config-file equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to work with shorter strings of digits. The fingerprint is therefore handy for comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps, see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
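The one-liner turns an RFC 4716 ("SSH2") public key into the OpenSSH one-line format by gluing the base64 lines together and appending the comment. It can be tried on a toy key (the payload below is fabricated, purely to demonstrate the reformatting; it assumes an RSA key, hence the hard-coded ssh-rsa prefix):

```shell
# A toy RFC 4716 public key file (fabricated payload, format demo only)
keyfile=$(mktemp)
cat > "$keyfile" <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "user@example"
AAAAB3NzaC1yc2EAAAABIwAA
AQEAexampleonly
---- END SSH2 PUBLIC KEY ----
EOF

# Same awk as above: join the base64 lines and append the comment
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' "$keyfile"
rm -f "$keyfile"
```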
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp
# mkdir --mode=0700 /home/sftp/.authorized_keys
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
Match Group sftp
ChrootDirectory /home/sftp/%u
AuthorizedKeysFile /home/sftp/.authorized_keys/%u
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>.
Then create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
= Two factor authentication =
== Google Authenticator ==
Since Google Authenticator is available for several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<source lang=bash>
$ sudo apt-get install libpam-google-authenticator
</source>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<source>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</source>
See the man page pam.d(5).
The meaning of the parameters:
* success=done : if pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : a required new authentication token is also treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : if pam_google_authenticator fails, no other authentication methods are tried.
* nullok : allows users who have not yet set up an OTP secret to pass this module.
=== Add settings to the /etc/ssh/sshd_config ===
This lines have to be in the /etc/ssh/sshd_config:
<source>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</source>
Without this setting in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd will still ask for a password, because /etc/pam.d/sshd enables password authentication.
621fd70701635bdc15203cffc36bb427bc189990
File:Nasutitermes small.JPG
6
338
1733
2017-05-05T14:31:15Z
Lollypop
2
Nasutitermes sp., Malaysia
wikitext
text/x-wiki
Nasutitermes sp., Malaysia
62fa468721e84aa4d9f75853d794496cac05055a
Category:Nasutitermes
14
317
1734
1543
2017-05-05T14:32:32Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| WissName = Nasutitermes
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Nasutitermitinae
| tribus =
| genus = Nasutitermes
| Bild = Nasutitermes_small.JPG
| Bildbeschreibung = Nasutitermes sp., Malaysia
}}
65f76a72d4039c6e3ef6b811382220e5a927df0b
Category:Termitinae
14
339
1735
2017-05-05T14:38:09Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | Autor = | regnum = Animalia | subregnum = Eumetazoa | phylum = Arthropoda | su…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Termitinae
}}
b0b433d348e318a32793f580ddc7b7746fc3cba1
Category:Macrotermes
14
340
1736
2017-05-05T14:40:06Z
Lollypop
2
Die Seite wurde neu angelegt: „{{Systematik | DeName = | Autor = Holmgren, 1913 | regnum = Animalia | subregnum = Eumetazoa | phylum = A…“
wikitext
text/x-wiki
{{Systematik
| DeName =
| Autor = Holmgren, 1913
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Termitinae
| genus = Macrotermes
}}
5097344b1376872640651533407860900d3b5ec9
1738
1736
2017-05-05T14:46:37Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName =
| Autor = Holmgren, 1913
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Termitidae
| subfamilia = Termitinae
| genus = Macrotermes
| Bild = Macrotermes_small.JPG
| Bildbeschreibung = Macrotermes sp., Malaysia
}}
bde43b8512cf38eb58b77da56d30a6cb99bc4b72
File:Macrotermes small.JPG
6
341
1737
2017-05-05T14:45:45Z
Lollypop
2
Macrotermes sp., Malaysia
wikitext
text/x-wiki
Macrotermes sp., Malaysia
c44eca595c6a66c9518cb33adabda7c627004d30
Oracle Clients
0
342
1739
2017-05-09T08:36:19Z
Lollypop
2
Die Seite wurde neu angelegt: „= Ubuntu = Download <pre> oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm oracle-instantclient12.2-sqlplus_12.2.0.1.0-2_amd64.deb </pre> from [http:…“
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus_12.2.0.1.0-2_amd64.deb
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<source lang=bash>
$ sudo apt install alien
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus_12.2.0.1.0-2_amd64.deb
$ for i in /usr/lib/oracle/12.2/client64/lib/*.so*
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
</source>
c8ef81db6c61abfdbad1b592d5388fa9ebace87f
1740
1739
2017-05-09T08:42:55Z
Lollypop
2
/* Ubuntu */
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<source lang=bash>
$ sudo apt install alien
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep .so )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
</source>
1577f970cdc5d6fdb3152dd1c6c303cdb2c31670
1741
1740
2017-05-09T12:18:09Z
Lollypop
2
/* Ubuntu */
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<source lang=bash>
$ sudo apt install alien
$ sudo apt install libaio1
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep .so )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
</source>
e527ceb702ffef4024887245c739f7b9701d5e77
1742
1741
2017-05-09T12:19:22Z
Lollypop
2
/* Ubuntu */
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<source lang=bash>
$ sudo apt install alien libaio1
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep .so )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
</source>
9f992e73d6d3e67d340375ec3b8093529553fe1e
1743
1742
2017-05-09T15:39:28Z
Lollypop
2
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<source lang=bash>
$ sudo apt install alien libaio1
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep .so )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
$ dpkg -L $(dpkg -l oracle-instantclient\*-basiclite | awk '$1=="ii"{print $2;}') | \
awk '
/client64$/{
oracle_home=$1;
printf "ORACLE_HOME=%s\nPATH=${PATH}:${ORACLE_HOME}/bin\nexport ORACLE_HOME PATH\n",oracle_home;
}' | \
sudo tee /etc/profile.d/oracle.sh
</source>
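The update-alternatives loop above relies on bash parameter expansion to derive the link name from each library path. A minimal sketch of just that expansion (the paths are made-up examples, not files that need to exist):

```shell
# Hypothetical library paths, as the for-loop above would see them
for i in /usr/lib/oracle/12.2/client64/lib/libclntsh.so.12.1 \
         /usr/lib/oracle/12.2/client64/lib/libocci.so.12.1
do
    # ${i##*/} removes the longest prefix matching "*/", i.e. the directory part
    BASENAME=${i##*/}
    echo "update-alternatives would link /usr/lib/${BASENAME} -> ${i}"
done
```

`${i##*/}` is the pure-shell equivalent of `basename "${i}"` and avoids one fork per file.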
23a0e324aaa60c4f47a8bb977ed87afac8660909
Ansible tips and tricks
0
299
1754
1416
2017-05-24T09:34:31Z
Lollypop
2
wikitext
text/x-wiki
[[ Kategorie: Ansible | Tips and tricks ]]
== Ansible commandline ==
=== Get settings for host ===
Gather the settings for the host given in ${hostname}:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</source>
For example:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</source>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the response file, collects them as a list of name/value pairs in <i>oracle_environment</i>, and also sets each variable as a fact of its own (prefixed with oracle_ if it does not already start with it). The variable <i>oracle_environment</i> can then be passed to <i>environment:</i> when you use <i>shell:</i>.
<source lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</source>
<source lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</source>
== Gathering oracle environment ==
<source lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{%- set tmp_env={} -%}
{%- for line in env.stdout_lines -%}
{%- set x=tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) -%}
{%- endfor -%}
{{ tmp_env }}
- debug: var=ora_env
</source>
70860299e30f462a585306ca55682ead385eba2a
1755
1754
2017-05-24T09:40:35Z
Lollypop
2
/* Gathering oracle environment */
wikitext
text/x-wiki
[[ Kategorie: Ansible | Tips and tricks ]]
== Ansible commandline ==
=== Get settings for host ===
Gather the settings for the host given in ${hostname}:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</source>
For example:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</source>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the response file, collects them as a list of name/value pairs in <i>oracle_environment</i>, and also sets each variable as a fact of its own (prefixed with oracle_ if it does not already start with it). The variable <i>oracle_environment</i> can then be passed to <i>environment:</i> when you use <i>shell:</i>.
<source lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</source>
<source lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</source>
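The awk call in the task does a plain substring match per item and prints everything after the first '='. A standalone sketch against a made-up response file (names and paths are illustrative only):

```shell
# A two-line stand-in for an Oracle response file (contents are invented)
cat > /tmp/db_demo.rsp <<'EOF'
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
EOF
# The same extraction the task runs, here for the item ORACLE_HOME
awk -F '=' '/ORACLE_HOME/{print $2;}' /tmp/db_demo.rsp
```

Since /ORACLE_HOME/ is a substring match, any other line mentioning ORACLE_HOME would match too; anchoring the pattern as /^ORACLE_HOME=/ would be stricter.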
== Gathering oracle environment ==
<source lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</source>
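The Jinja loop above splits each NAME=VALUE line from env on '='. The same split can be sketched in plain bash (sample lines are made up). Note that line.split('=')[1] keeps only the part up to the second '=', whereas the shell expansion below keeps the whole value intact:

```shell
# Sample lines as "env | egrep ..." might emit them (values are invented)
lines='ORACLE_SID=ORCL
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1'

while IFS= read -r line
do
    name=${line%%=*}    # everything before the first '='
    value=${line#*=}    # everything after the first '=' (keeps later '=' signs)
    printf '%s -> %s\n' "${name}" "${value}"
done <<< "${lines}"
```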
146774c99b59a49197a87f54d629567d4d380566
ZFS on Linux
0
222
1756
1370
2017-05-29T16:07:44Z
Lollypop
2
/* Setup Ubuntu 16.04 with ZFS root */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot Ubuntu Desktop (alias Live CD) and choose "try out".
===Get the right ashift value===
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print $0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
fb266875b2fca5ffe6624fcfd89a9b6a704957dd
1757
1756
2017-05-29T16:08:21Z
Lollypop
2
/* Get the right ashift value */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot Ubuntu Desktop (alias Live CD) and choose "try out".
===Get the right ashift value===
For example to get sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print $0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
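The exponent function in the awk one-liner computes log2 of the physical sector size, which is exactly the ashift value. The same calculation as a standalone shell function (no lsblk needed):

```shell
# ashift = log2(physical sector size)
log2() {
    local value=$1 i=0
    while [ "${value}" -gt 1 ]
    do
        value=$(( value / 2 ))
        i=$(( i + 1 ))
    done
    echo "${i}"
}
log2 512    # -> 9,  i.e. ashift=9  for 512-byte sectors
log2 4096   # -> 12, i.e. ashift=12 for 4K sectors
```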
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
ef6d6cee56d8580c669ee70a0e1b07b1feac80cf
Perl Tipps und Tricks
0
178
1761
540
2017-06-22T14:23:11Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Perl|Tipps und Tricks]]
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<source lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</source>
== Config ==
Override compile-time flags on the command line like this:
<source lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</source>
I used it to run sa-compile on Solaris:
<source lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</source>
8cdcce287fe1baadebfd4942eae63cbf9d6d8c8e
SSL and TLS
0
229
1762
1340
2017-06-26T07:00:18Z
Lollypop
2
/* HTTPS */
wikitext
text/x-wiki
[[Kategorie: Security]]
=Web=
==HTTPS==
===TLSA - Record ===
<source lang=bash>
$ openssl s_client -connect lars.timmann.de:443 </dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER | openssl sha256
(stdin)= e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</source>
This digest can be used in a TLSA record with usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo, since the pipeline above hashes the public key rather than the whole certificate) and matching type 1 (SHA-256):
_443._tcp.lars.timmann.de. 60 IN TLSA 3 1 1 e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
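The same digest pipeline can be tried locally without a live server, assuming openssl is installed, by generating a throwaway key and hashing its public key in DER form (hostname and key are examples only):

```shell
# Throwaway 2048-bit RSA key, just for the demonstration
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out /tmp/tlsa-demo.key 2>/dev/null
# Public key in DER form, hashed with SHA-256 -- the value a selector-1 TLSA record carries
digest=$(openssl pkey -in /tmp/tlsa-demo.key -pubout -outform DER | openssl sha256 | awk '{print $NF}')
echo "_443._tcp.example.org. 60 IN TLSA 3 1 1 ${digest}"
```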
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
The max-age is entered in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year in seconds.
What changes when this header is set and the browser understands it?
The browser rewrites every link on the page to https, even if the link is plain http. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script for creating the hashes was written by Hanno Böck and is available at [https://github.com/hannob/hpkp Github].
I added a create option, which makes the script more convenient for me; my fork is at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
# sudo a2enmod headers
</source>
=Mail=
==STARTTLS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</source>
You can specify the security priority for the handshake like this:
<source lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</source>
Or use sslscan to check the available ciphers:
<source lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</source>
==SMTPS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -connect <mailserver>:465
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --port 465 <mailserver>
</source>
a08e7c8fcd07f3e1c3b57a44704bec4df755ce3a
VMWare Hints
0
343
1763
2017-06-26T09:33:55Z
Lollypop
2
Created new page: „[[Kategorie:VMWare]] - [https://labs.vmware.com/flings VMWare Flings] - [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters]“
wikitext
text/x-wiki
[[Kategorie:VMWare]]
- [https://labs.vmware.com/flings VMWare Flings]
- [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters]
28291aebcbbc0c277bef621a0de8b5cdb4a1720f
1764
1763
2017-06-26T09:40:38Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
4727e05180caaf20c4cc9db69d9d7b1b84464541
1765
1764
2017-06-26T09:57:10Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
f1f5f76de8292134ee92a6a1bb4a56739e4ba1ff
1766
1765
2017-06-26T10:12:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
39eab07ab49efa2c08f7ed72a92d1507e49c5462
1767
1766
2017-06-26T14:14:47Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
c6abbed83fcfd07f5639fdabf78d89f2016e70e1
1768
1767
2017-06-26T14:21:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
c056e9db2bc415fea2e394e61bde3f5fb3608905
1769
1768
2017-06-26T14:22:13Z
Lollypop
2
/* Links */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
a4ff8c6c00c2f74029f117e21dd0d86cfc2bb506
1770
1769
2017-06-26T21:31:18Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
74a48fd745e3e2ebbde00c9d182d59791ca2bd45
1773
1770
2017-06-28T07:19:51Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
77991b5a35f50dfe2783bdca588ccf59f9ce38a1
1774
1773
2017-06-28T07:32:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* /usr/lib/vmware/bin/vscsiStats
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
93bee8db93ff240a3fade839e5e747c1ef71650c
1775
1774
2017-06-28T07:39:05Z
Lollypop
2
/* I/O */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* vscsiStats : /usr/lib/vmware/bin/vscsiStats
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
ce46538b9026dd042f08446e9b4f4b63ab41d18b
1776
1775
2017-06-28T07:41:33Z
Lollypop
2
/* I/O */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
0f864338569e3b211153545942c28077587ee5d3
1777
1776
2017-06-28T07:45:35Z
Lollypop
2
/* I/O */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
bf24c340065ea166445634d93c0dc7c5f1351494
1778
1777
2017-06-28T08:56:08Z
Lollypop
2
/* I/O */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
3fa45a4776e70c3806cac3cd32572be0a12276ce
VMWare CLi
0
344
1771
2017-06-28T07:08:48Z
Lollypop
2
The page was created: „=== Routen === <pre> esxcli network ip route ipv4 add --network=10.14.90.0/25 --gateway=10.128.1.9 esxcli network ip route ipv4 add --network=10.14.95.0/25 --…“
wikitext
text/x-wiki
=== Routes ===
<pre>
esxcli network ip route ipv4 add --network=10.14.90.0/25 --gateway=10.128.1.9
esxcli network ip route ipv4 add --network=10.14.95.0/25 --gateway=10.128.1.9
esxcli network ip route ipv4 add --network=10.14.90.128/25 --gateway=10.128.1.10
esxcli network ip route ipv4 add --network=10.14.95.128/25 --gateway=10.128.1.10
</pre>
=== Firewall ===
==== SSH ====
<pre>
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id sshServer
Ruleset Allowed IP Addresses
--------- ------------------------------
sshServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== HTTP ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMHttpServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMHttpServer
Ruleset Allowed IP Addresses
------------- ----------------------------
CIMHttpServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== HTTPS ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMHttpsServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpsServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpsServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMHttpsServer
Ruleset Allowed IP Addresses
-------------- ----------------------------
CIMHttpsServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== CIMSLP ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMSLP --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMSLP --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMSLP --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMSLP
Ruleset Allowed IP Addresses
------- ----------------------------
CIMSLP 10.14.0.0/16, 192.168.2.0/24
</pre>
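The same allowlist is repeated verbatim for each ruleset above; a small loop can apply it in one pass. This is only a sketch built from the examples: the ruleset names and networks are the ones used above, and the dry-run fallback is an assumption added so the loop can be exercised outside an ESXi shell.
<pre>
# Sketch: apply one allowlist to several rulesets in a single pass.
# The dry-run fallback (hypothetical) lets you try the loop off-box.
command -v esxcli >/dev/null 2>&1 || esxcli() { echo "[dry-run] esxcli $*"; }

restrict_ruleset() {
    local rs
    for rs in "$@"
    do
        esxcli network firewall ruleset set --ruleset-id "${rs}" --allowed-all false
        esxcli network firewall ruleset allowedip add --ruleset-id "${rs}" --ip-address 10.14.0.0/16
        esxcli network firewall ruleset allowedip add --ruleset-id "${rs}" --ip-address 192.168.2.0/24
    done
}

restrict_ruleset sshServer CIMHttpServer CIMHttpsServer CIMSLP
</pre>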
5a0794cfc0123c3d67108ddedc6d5577caaaa54b
1772
1771
2017-06-28T07:19:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: VMWare]]
=== Routes ===
<pre>
esxcli network ip route ipv4 add --network=10.14.90.0/25 --gateway=10.128.1.9
esxcli network ip route ipv4 add --network=10.14.95.0/25 --gateway=10.128.1.9
esxcli network ip route ipv4 add --network=10.14.90.128/25 --gateway=10.128.1.10
esxcli network ip route ipv4 add --network=10.14.95.128/25 --gateway=10.128.1.10
</pre>
=== Firewall ===
==== SSH ====
<pre>
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id sshServer
Ruleset Allowed IP Addresses
--------- ------------------------------
sshServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== HTTP ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMHttpServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMHttpServer
Ruleset Allowed IP Addresses
------------- ----------------------------
CIMHttpServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== HTTPS ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMHttpsServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpsServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpsServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMHttpsServer
Ruleset Allowed IP Addresses
-------------- ----------------------------
CIMHttpsServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== CIMSLP ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMSLP --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMSLP --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMSLP --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMSLP
Ruleset Allowed IP Addresses
------- ----------------------------
CIMSLP 10.14.0.0/16, 192.168.2.0/24
</pre>
9cef77f573b1129844be3032615c7cb64d71bd9b
VMWare Hints
0
343
1779
1778
2017-06-28T09:11:50Z
Lollypop
2
/* CLI */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
a1f218030586f01fd34c99f21bab86be9eb9f780
1780
1779
2017-06-28T09:13:03Z
Lollypop
2
/* Links */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
* [http://www.running-system.com VMWare related BLOG]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
9fdee8b1695e07ff8e09b8c5f6b3ed7c3dc9e6b3
1781
1780
2017-06-28T09:13:41Z
Lollypop
2
/* CLI */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
=== ESXTOP ===
* [http://www.running-system.com/vsphere-6-esxtop-quick-overview-for-troubleshooting/ vSphere 6 ESXTOP quick overview for Troubleshooting]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
* [http://www.running-system.com VMWare related BLOG]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
271c11ef239da763db3ce2bf70bd77edb8d16477
1782
1781
2017-06-28T09:34:32Z
Lollypop
2
/* ESXTOP */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
=== ESXTOP ===
* [http://www.running-system.com/vsphere-6-esxtop-quick-overview-for-troubleshooting/ vSphere 6 ESXTOP quick overview for Troubleshooting]
* [https://communities.vmware.com/docs/DOC-9279 Interpreting esxtop Statistics]
* [http://www.vmworld.net/wp-content/uploads/2012/05/Esxtop_Troubleshooting_ger.pdf vSphere 5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2012/08/esxtop_english_v11.pdf vSphere 5.5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2015/04/ESXTOP_vSphere6.pdf vSphere 6 ESXTOP quick Overview for Troubleshooting]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
* [http://www.running-system.com VMWare related BLOG]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
05c6c93f30a50dc845093dc4ce6db04423c62822
1783
1782
2017-06-28T09:35:48Z
Lollypop
2
/* ESXTOP */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
=== ESXTOP ===
* [http://www.running-system.com/vsphere-6-esxtop-quick-overview-for-troubleshooting/ vSphere 6 ESXTOP quick overview for Troubleshooting]
* [https://communities.vmware.com/docs/DOC-9279 Interpreting esxtop Statistics]
* [http://www.vmworld.net/wp-content/uploads/2012/05/Esxtop_Troubleshooting_ger.pdf PDF : vSphere 5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2012/08/esxtop_english_v11.pdf PDF : vSphere 5.5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2015/04/ESXTOP_vSphere6.pdf PDF : vSphere 6 ESXTOP quick Overview for Troubleshooting]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
* [http://www.running-system.com VMWare related BLOG]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
7714697b45ca3bbdb77151671513e904e2393832
Bash cheatsheet
0
37
1784
1405
2017-07-13T11:51:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, the sudo user is used instead;
if $SUDO_USER is empty too, "default" is used as the extension.
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example, split an IP address:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
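The unquoted expansion above depends on word splitting and would also glob a <code>*</code> in the input. A sketch of a variant that avoids both, assuming bash's <code>read -a</code> and a here-string:
<source lang=bash>
# Split with read -a and a one-off IFS; no word splitting or globbing
ip="10.1.2.3"
IFS=. read -r -a octets <<< "${ip}"
echo "${#octets[@]} octets -> ${octets[*]}"
</source>
This prints the same "4 octets -> 10 1 2 3" as above; IFS is only changed for the read command itself.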
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example with services in normal and reverse order, for start/stop:
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
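The index loop above can also be sketched without explicit arithmetic, assuming GNU <code>tac</code> and bash 4's <code>mapfile</code> are available and the element names contain no whitespace:
<source lang=bash>
# Emit one element per line, reverse the lines, read them back into an array
declare -a SERVICES_STOP=(service1 service2 service3 service4)
mapfile -t SERVICES_START < <(printf '%s\n' "${SERVICES_STOP[@]}" | tac)
echo "${SERVICES_START[*]}"
</source>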
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. incrementing by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Put your code between <i>while</i> and <i>do</i>, and use the null command <i>:</i> as the (empty) loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
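The <code>$[ ]</code> form still works in bash but is deprecated; <code>$(( ))</code> accepts the same expressions and is the documented spelling:
<source lang=bash>
# Same calculations with arithmetic expansion $(( ))
echo $(( 3 + 4 ))     # 7
echo $(( 2 ** 8 ))    # 2^8 = 256
</source>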
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $*" # $* keeps all args in one command string for su -c
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(/bin/date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(/bin/date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
fa022212869c3c82e31bf127e851414d71d8625d
1785
1784
2017-07-13T11:52:48Z
Lollypop
2
/* Add a timestamp to all output */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, the sudo user is used instead;
if $SUDO_USER is empty too, "default" is used as the extension.
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow only users in group ssh to log in via SSH, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example, split an IP address:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example with services in normal and reverse order, for start/stop:
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. incrementing by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Put your code between <i>while</i> and <i>do</i>, and use the null command <i>:</i> as the (empty) loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $*" # $* keeps all args in one command string for su -c
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(/bin/date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
38bc52dfe6ae129d859aa60bb19726d504be244d
1786
1785
2017-07-13T11:53:00Z
Lollypop
2
/* Add a timestamp to all output and send to file */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using SSH public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, the sudo user is used instead;
if $SUDO_USER is empty too, "default" is used as the extension.
I configured rsyslog to write an additional logfile that group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow ssh logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work too, of course, e.g. incrementing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with two loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the null command <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
046ce9a1e18cb411c3e37b658b26f07e90070093
1787
1786
2017-07-13T14:43:29Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using ssh public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the extension.
I forced rsyslog to write another logfile where group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow ssh logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work too, of course, e.g. incrementing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with two loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the null command <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
<source lang=bash>
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</source>
e0a676a73531056062e057f34df8975278109ca4
1788
1787
2017-07-13T14:44:19Z
Lollypop
2
/* Parameter parsing */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using ssh public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the extension.
I forced rsyslog to write another logfile where group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow ssh logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work too, of course, e.g. incrementing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with two loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the null command <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${*}"
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</source>
89b4a517bfd0b08d48fbb812b424eb9a5e814454
1804
1788
2017-07-21T07:47:05Z
Lollypop
2
/* Log with timestamp */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
You need to set LogLevel of sshd to VERBOSE in your /etc/ssh/sshd_config:
<source lang=bash>
...
LogLevel VERBOSE
...
</source>
If you are using ssh public keys for authentication and want a separate history for each user, you can put this in your .bash_profile:
<source lang=bash>
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(nawk -v ssh_connection="${SSH_CONNECTION}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection)}/.*sshd\[[0-9]+\]: Accepted publickey for/ && $(NF-5)==connection[1] && $(NF-3)==connection[2] {print $NF;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
</source>
If $FINGERPRINT is empty, $SUDO_USER is used instead.
If $SUDO_USER is empty too, "default" is used as the extension.
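The nested default expansion in the HISTFILE line can be tried on its own (a minimal sketch; the variable values are made up):
<source lang=bash>
#!/bin/bash
FINGERPRINT=""
SUDO_USER=""
# Both empty -> fall back to "default"
echo "suffix: ${FINGERPRINT:-${SUDO_USER:-default}}"
# FINGERPRINT still empty -> the sudo user is used
SUDO_USER="alice"
echo "suffix: ${FINGERPRINT:-${SUDO_USER:-default}}"
</source>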
I forced rsyslog to write another logfile where group ssh may read:
/etc/rsyslog.d/99-fingerprint.conf:
<source lang=bash>
$FileCreateMode 0640
$FileGroup ssh
auth /var/log/fingerprint.log
</source>
Add user syslog to group ssh so that syslog can open a file as group ssh:
<source lang=bash>
# usermod -aG ssh syslog
</source>
Allow ssh logins only for users in group ssh, except the syslog user:
/etc/ssh/sshd_config:
<source lang=bash>
# SSH is only allowed for users in this group
AllowGroups ssh
DenyUsers syslog
</source>
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
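The unquoted array assignment above relies on word splitting and would also trigger pathname expansion on characters like *. A safer sketch of the same split uses read -a with a custom IFS:
<source lang=bash>
#!/bin/bash
ip="10.1.2.3"
# Split on "." into an array; no globbing involved
IFS=. read -r -a octets <<< "${ip}"
echo "${#octets[@]} octets -> ${octets[@]}"
</source>
This prints the same "4 octets -> 10 1 2 3" as above.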
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work too, of course, e.g. incrementing by 3 each time:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with two loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the null command <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
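The same pattern with a visible effect (a minimal sketch): the part before the control expression always runs, giving do/while behaviour.
<source lang=bash>
#!/bin/bash
i=0
while
    i=$(( i + 1 ))   # always executed, even before the first test
    (( i <= 3 ))     # control expression decides whether to continue
do
    echo "$i"
done
</source>
This prints 1, 2 and 3.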
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ ${#} -ge 1 ]
then
format=$1; shift;
printf "%s : ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" ${*}
else
while read input
do
printf "%s : %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
$ printlog "test %s %d %s\n" "bla" 0 "bli"
20170721 09:45:06 : test bla 0 bli
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
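The $[ ] form is old bash syntax that is no longer documented; the arithmetic expansion $(( )) is the equivalent modern spelling (exponentiation with ** is still a bash extension):
<source lang=bash>
echo $(( 3 + 4 ))        # 7
echo $(( 2 ** 8 ))       # 256
echo $(( (10 + 2) / 4 )) # integer division -> 3
</source>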
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</source>
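The second case statement normalizes each parameter name before exporting it. That step can be sketched standalone (the --log-level option is a made-up example):
<source lang=bash>
#!/bin/bash
param="--log-level"
value="debug"
param=${param#--}            # strip leading dashes -> log-level
param=${param/-/_}           # dash to underscore   -> log_level
export ${param^^}=${value}   # uppercase the name   -> LOG_LEVEL=debug
echo "${LOG_LEVEL}"
</source>
Note that ${param/-/_} replaces only the first hyphen; an option name with more than one hyphen would need ${param//-/_}.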
72c39185f8236dff503bffbe907a49759f674962
RadSecProxy
0
345
1789
2017-07-13T15:14:31Z
Lollypop
2
Created the page: „=RadSecProxy= ==Build== ===Patch for radsecproxy-1.6.8 on Ubuntu 16.04=== [https://project.nordu.net/browse/RADSECPROXY-72 taken from here] <source lang=diff…“
wikitext
text/x-wiki
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing; reduce to 3 later
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Wrongly configured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
===systemd unit file===
# systemctl cat radsecproxy.service
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this to /lib/systemd/system/radsecproxy.service and do:
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
===Testing===
# openssl s_client -connect <IP>:2083 -showcerts
b6be4d03b7c3a6602972efdeb3dc4a73d8d3d707
1790
1789
2017-07-13T15:15:02Z
Lollypop
2
/* systemd unit file */
wikitext
text/x-wiki
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing; reduce to 3 later
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===systemd unit file===
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
65d045723230b415fe9f7d51e7da7bb7d564ac42
1791
1790
2017-07-13T15:16:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===systemd unit file===
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
ee3962f16ead472cb64ec6f8b738472a849776a6
1792
1791
2017-07-13T15:16:43Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===systemd unit file===
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
468ca8ddb43272051e964cf5b92421a6a2d74015
1795
1792
2017-07-14T07:05:17Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===systemd unit file===
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
<graphviz caption="Eduroam structure at HSU" alt="Eduroam structure at HSU" format="png">
digraph Eduroam {
node [shape=plaintext];
node_client [shape=rect, style=rounded, label="Client", labelloc="b", URL="[[Eduroam#]]"];
node_ap [shape=rect, style=rounded, label="Access Point", labelloc="b", URL="[[Eduroam#]]"];
node_wlc [shape=rect, style=rounded, label="WLAN Controller / WLC", labelloc="b", URL="[[Eduroam#wlc]]"];
node_hsu_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 | UDP:1812 | UDP:1813 }", labelloc="b", URL="[[Eduroam#]]"];
node_hsu_freeradius [shape=record, style=rounded, label="{ FreeRadius | UDP:1812 | UDP:1813 }", labelloc="b", URL="[[Eduroam#]]"];
node_internet [shape=hexagon, label="Internet", labelloc="b"];
node_dfn_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 }", labelloc="b", URL="[[Eduroam#]]"];
node_desy_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 }", labelloc="b", URL="[[Eduroam#]]"];
node_client -> node_ap [label="logs on"];
node_ap -> node_wlc [label="queries"];
node_wlc -> node_hsu_radsecproxy [label="queries"];
node_hsu_radsecproxy -> node_hsu_freeradius [label="queries for realm @hsu-hh.de"];
node_hsu_radsecproxy -> node_dfn_radsecproxy [label="queries for all other realms"];
node_dfn_radsecproxy -> node_desy_radsecproxy [label="queries for realm @desy.de"];
node_dfn_radsecproxy -> node_hsu_radsecproxy [label="queries for realm @hsu-hh.de"];
}
</graphviz>
965bd19814630db0576a3c000a08e0c54b8e9dcd
1796
1795
2017-07-14T07:34:04Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===systemd unit file===
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
<graphviz caption="Eduroam structure at HSU" alt="Eduroam structure at HSU" format="png">
digraph Eduroam {
node [shape=plaintext];
node_client [shape=rect, style=rounded, label="Client", labelloc="b", URL="[[Eduroam#]]"];
node_ap [shape=rect, style=rounded, label="Access Point", labelloc="b", URL="[[Eduroam#]]"];
node_wlc [shape=rect, style=rounded, label="WLAN Controller / WLC", labelloc="b", URL="[[Eduroam#wlc]]"];
node_hsu_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 | UDP:1812 | UDP:1813 }", labelloc="b", URL="[[Eduroam#]]"];
node_hsu_freeradius [shape=record, style=rounded, label="{ FreeRadius | UDP:1812 | UDP:1813 }", labelloc="b", URL="[[Eduroam#]]"];
node_internet [shape=hexagon, label="Internet", labelloc="b"];
node_dfn_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 }", labelloc="b", URL="[[Eduroam#]]"];
node_desy_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 }", labelloc="b", URL="[[Eduroam#]]"];
node_client -> node_ap [label="logs on"];
node_ap -> node_wlc [label="queries"];
node_wlc -> node_hsu_radsecproxy [label="queries"];
node_hsu_radsecproxy -> node_hsu_freeradius [label="queries for realm @hsu-hh.de"];
node_hsu_radsecproxy -> node_dfn_radsecproxy [label="queries for all other realms"];
node_dfn_radsecproxy -> node_desy_radsecproxy [label="queries for realm @desy.de"];
node_dfn_radsecproxy -> node_hsu_radsecproxy [label="queries for realm @hsu-hh.de"];
}
</graphviz>
b789a2569faca9a0ed5ac6dca712039ab1fa84c5
1797
1796
2017-07-14T07:35:19Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
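The patched second argument of listen() matters because it sets the kernel backlog: with 0, concurrent incoming TLS connections have almost nowhere to queue and can be refused under load. A minimal Python sketch of the same bind/listen/accept pattern (illustrative only, not radsecproxy code):

```python
import socket

# Mirror the patched accept loop setup from tcp.c/tls.c:
# listen with an explicit backlog of 16 instead of 0, so
# not-yet-accepted connections can queue in the kernel.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # ephemeral loopback port, demo only
srv.listen(16)               # the patched backlog value

cli = socket.create_connection(srv.getsockname())
conn, _from = srv.accept()   # corresponds to accept(*sp, ...) in the C code
conn.sendall(b"ok")
received = cli.recv(2)
cli.close(); conn.close(); srv.close()
```

The value 16 is simply what the patch uses; any small positive backlog avoids the degenerate zero-length queue.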
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
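Because certificatenamecheck is off, the matchCertificateAttribute regex is the only identity check on the TLR connections. What the tlr1 pattern accepts can be sketched with Python's re module (the matching engine here is only an illustration; radsecproxy does its own matching):

```python
import re

# The CN pattern from the tlr1 client block above.
tlr1_cn = re.compile(r"^(radius1\.dfn|tld1\.eduroam)\.de$")

candidates = ("radius1.dfn.de", "tld1.eduroam.de",
              "radius1.dfn.de.evil.example", "radius2.dfn.de")
accepted = [cn for cn in candidates if tlr1_cn.match(cn)]
```

The ^ and $ anchors are what keep a CN like radius1.dfn.de.evil.example from slipping through.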
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
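radsecproxy evaluates realm blocks in file order and uses the first match, which is why the literal realm comes first, the reject rules next, and the * catch-all last. A hypothetical Python sketch of that first-match routing (names and order taken from the file above; the function is not radsecproxy code, and radsecproxy's * is a literal catch-all, modelled here as an empty regex):

```python
import re

# (pattern, target) pairs in file order; target None means "reject here".
ROUTES = [
    (re.compile(r"^domain\.tld$", re.I), "Our-EduroamRadiusAuth"),
    (re.compile(r"myabc\.com$", re.I),   None),    # Intel supplicant default realm
    (re.compile(r"^$"),                  None),    # empty realm
    (re.compile(r""),                    "tlr1"),  # default route (matches anything)
]

def route(realm):
    """Return the target of the first matching realm block."""
    for pattern, target in ROUTES:
        if pattern.search(realm):
            return target
```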
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
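CertificateFile holds the server certificate followed by its chain in one PEM bundle. A small illustrative Python helper (plain string handling; only the standard PEM marker lines are assumed) that splits such a bundle, e.g. to count the certificates before deploying it:

```python
# Standard PEM certificate markers.
BEGIN = "-----BEGIN CERTIFICATE-----"
END = "-----END CERTIFICATE-----"

def split_pem(bundle):
    """Split a concatenated PEM bundle into individual certificate blocks."""
    blocks, current = [], None
    for line in bundle.splitlines():
        if line.strip() == BEGIN:
            current = [line]
        elif line.strip() == END and current is not None:
            current.append(line)
            blocks.append("\n".join(current))
            current = None
        elif current is not None:
            current.append(line)
    return blocks

# Dummy two-certificate bundle, shaped like radsecproxy.pem above.
bundle = "\n".join([BEGIN, "MIIB...server", END, BEGIN, "MIIB...chain", END])
certs = split_pem(bundle)
```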
==Run the daemon==
===systemd unit file===
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
# /lib/systemd/system/radsecproxy.service
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
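The unit file is plain INI, so its three sections can be sanity-checked before reloading systemd; a sketch with Python's configparser (systemd itself does not use configparser, this is only a quick lint, with the unit text inlined for the demo instead of read from disk):

```python
import configparser

# The unit file from above, inlined for the demo; in practice
# read /lib/systemd/system/radsecproxy.service instead.
UNIT = """\
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)

[Service]
Type=forking
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy.pid
PIDFile=/run/radsecproxy.pid

[Install]
WantedBy=multi-user.target
"""

parser = configparser.ConfigParser()
parser.optionxform = str      # systemd keys are case-sensitive
parser.read_string(UNIT)
exec_start = parser["Service"]["ExecStart"]
```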
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
<graphviz caption="Eduroam structure at HSU" alt="Eduroam structure at HSU" format="png">
digraph Eduroam {
node [shape=plaintext];
node_client [shape=rect, style=rounded, label="Client", labelloc="b", URL="[[Eduroam#]]"];
node_ap [shape=rect, style=rounded, label="Access Point", labelloc="b", URL="[[Eduroam#]]"];
node_wlc [shape=rect, style=rounded, label="WLAN Controller / WLC", labelloc="b", URL="[[Eduroam#wlc]]"];
node_hsu_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 | UDP:1812 | UDP:1813 }", labelloc="b", URL="[[Eduroam#]]"];
node_hsu_freeradius [shape=record, style=rounded, label="{ FreeRadius | UDP:1812 | UDP:1813 }", labelloc="b", URL="[[Eduroam#]]"];
node_internet [shape=hexagon, label="Internet", labelloc="b"];
node_dfn_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 }", labelloc="b", URL="[[Eduroam#]]"];
node_desy_radsecproxy [shape=record, style=rounded, label="{ RadSecProxy | TCP:2083 }", labelloc="b", URL="[[Eduroam#]]"];
node_client -> node_ap [label="logs on"];
node_ap -> node_wlc [label="queries"];
node_wlc -> node_hsu_radsecproxy [label="queries"];
node_hsu_radsecproxy -> node_hsu_freeradius [label="queries for realm @hsu-hh.de"];
node_hsu_radsecproxy -> node_dfn_radsecproxy [label="queries for all other realms"];
node_dfn_radsecproxy -> node_desy_radsecproxy [label="queries for realm @desy.de"];
node_dfn_radsecproxy -> node_hsu_radsecproxy [label="queries for realm @hsu-hh.de"];
}
</graphviz>
08fd9b8b7532f936bb4d99734cb29b2ae92009c9
1798
1797
2017-07-14T08:28:18Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy built from git on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing; reduce to 3 later
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); for other countries you have to adjust the hosts and certificate attributes.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
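matchCertificateAttribute takes an extended regular expression that is matched against the CN of the peer certificate. Before deploying, a pattern can be sanity-checked with grep -E, which uses the same extended-regex dialect (a quick sketch; the hostnames are taken from the config above):

```shell
# Sanity-check the tlr1 CN pattern against candidate names.
pattern='^(radius1\.dfn|tld1\.eduroam)\.de$'
for cn in radius1.dfn.de tld1.eduroam.de radius1.dfn.de.evil.example; do
    if printf '%s\n' "$cn" | grep -Eq "$pattern"; then
        echo "$cn: match"
    else
        echo "$cn: no match"
    fi
done
```

The ^ and $ anchors are what prevent a name like radius1.dfn.de.evil.example from matching.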
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
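Routing keys on the realm part of User-Name, i.e. everything after the last '@', and realm blocks are tried in config order. As an illustration only (route() is a made-up helper, not radsecproxy's actual lookup code), the decision logic above amounts to:

```shell
# Toy model of the realm routing above; route() is a made-up helper.
route() {
    realm="${1##*@}"    # realm = part of User-Name after the last '@'
    case "$realm" in
        domain.tld)    echo "Our-EduroamRadiusAuth" ;;  # our own users
        *myabc.com|"") echo "reject" ;;                 # misconfigured clients
        *)             echo "tlr1/tlr2" ;;              # default: eduroam TLR
    esac
}
route alice@domain.tld        # -> Our-EduroamRadiusAuth
route bob@myabc.com           # -> reject
route carol@other-uni.ac.uk   # -> tlr1/tlr2
```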
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
... followed by the whole certificate chain ...
</source>
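For lab testing you can generate a self-signed key and certificate with a matching subject. This is only a sketch with placeholder names: it writes an unencrypted key (the config above expects an encrypted one, see CertificateKeyPassword), and it is no substitute for a certificate from your federation CA.

```shell
# Lab-only: self-signed key + certificate with a subject like the one above.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/C=DE/ST=Hamburg/L=Hamburg/O=bli/OU=bla/CN=radsecproxy.domain.tld" \
    -keyout "$tmp/radsecproxy.key" -out "$tmp/radsecproxy.pem" 2>/dev/null
# Inspect what ended up in the certificate:
openssl x509 -in "$tmp/radsecproxy.pem" -noout -subject
```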
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog instead).
The config, certificate and key are not readable via the user's primary group (nogroup), but via the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
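The resulting modes can be verified with stat. Here is a self-contained demo on a throwaway directory standing in for /etc/radsec (safe to run as any user):

```shell
# Demo of the permission scheme on a stand-in for /etc/radsec.
d=$(mktemp -d)
mkdir -p "$d/cert"
touch "$d/radsecproxy.conf" "$d/cert/radsecproxy.key"
find "$d" -type d -exec chmod 0750 {} \;
find "$d" -type f -exec chmod 0640 {} \;
# Directories end up 750, files 640:
stat -c '%a %n' "$d" "$d/cert" "$d/radsecproxy.conf" "$d/cert/radsecproxy.key"
```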
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
# openssl s_client -connect <IP>:2083 -showcerts
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Wrong counfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole cerstificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log or use syslog.
The config, certificate and key is not readable by the user (nogroup) but by the group radsecproxy where the porocess lives in (see systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -h /nonexistent
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this to /lib/systemd/system/radsecproxy.service and do:
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
===Testing===
# openssl s_client -connect <IP>:2083 -showcerts
===Certificate Enddate===
# $ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
46e45861583b052658ab59f1242b89eaa77d72ec
1806
1805
2017-08-11T08:18:06Z
Lollypop
2
/* Certificate Enddate */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy from git on [[https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net]] this patch is not needed since [[https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017]].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches our german top level radius (tlr) you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Wrong counfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole cerstificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log or use syslog.
The config, certificate and key is not readable by the user (nogroup) but by the group radsecproxy where the porocess lives in (see systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -h /nonexistent
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this to /lib/systemd/system/radsecproxy.service and do:
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
===Testing===
# openssl s_client -connect <IP>:2083 -showcerts
===Certificate Enddate===
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
9b8d29af707571cd07293050105da0f81523dae6
1807
1806
2017-08-11T08:18:17Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy from git on [[https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net]] this patch is not needed since [[https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017]].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches our german top level radius (tlr) you have to customize it for other countries.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Wrong counfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole cerstificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log or use syslog.
The config, certificate and key is not readable by the user (nogroup) but by the group radsecproxy where the porocess lives in (see systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -h /nonexistent
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this to /lib/systemd/system/radsecproxy.service and do:
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
===Testing===
$ openssl s_client -connect <IP>:2083 -showcerts
===Certificate Enddate===
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
8355c9ba82eb1f5af2f0e85afe8426b0b371a31a
1808
1807
2017-08-18T08:08:44Z
Lollypop
2
/* Patch for radsecproxy-1.6.8 on Ubuntu 16.04 */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git sources on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch has not been needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
The patch was [https://project.nordu.net/browse/RADSECPROXY-72 taken from here]; it raises the TCP listen backlog from 0 to 16.
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
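If you build 1.6.8 from the release tarball, the diff above is applied with patch(1) from the top of the source tree. The following self-contained sketch demonstrates the mechanics on a toy file (hypothetical paths; the real patch is applied the same way):
<source lang=bash>
tmp=$(mktemp -d) && cd "$tmp"
# Toy stand-in for tcp.c containing the line the patch rewrites
printf 'listen(*sp, 0);\n' > tcp.c
cat > fix.patch <<'EOF'
--- a/tcp.c
+++ b/tcp.c
@@ -1 +1 @@
-listen(*sp, 0);
+listen(*sp, 16);
EOF
# -p1 strips the leading a/ and b/ path components
patch -p1 < fix.patch
cat tcp.c
</source>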
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); for other countries you have to customize it.
<source>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
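The ''matchCertificateAttribute'' lines match a regular expression against the CN of the peer certificate. A quick sketch to sanity-check such a pattern with grep before deploying it (hypothetical CNs; this only tests the regex itself, not radsecproxy's matching code):
<source lang=bash>
re='^(radius1\.dfn|tld1\.eduroam)\.de$'
for cn in radius1.dfn.de tld1.eduroam.de evil.radius1.dfn.de; do
  # anchored pattern: only the two expected CNs should be accepted
  echo "$cn" | grep -Eq "$re" && echo "accept: $cn" || echo "reject: $cn"
done
</source>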
===/etc/radsec/servers.conf===
<source>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or has to use syslog).
The configuration, certificate, and key are not readable through the user's primary group (nogroup), but through the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
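The resulting modes can be verified with stat(1). A self-contained sketch of the same scheme on a scratch directory, so nothing under /etc is touched:
<source lang=bash>
d=$(mktemp -d)
mkdir -p "$d/cert" && touch "$d/cert/radsecproxy.key"
# same find/chmod commands as above, applied to the scratch tree
find "$d" -type d -exec chmod 0750 {} \;
find "$d" -type f -exec chmod 0640 {} \;
stat -c '%a' "$d/cert/radsecproxy.key"   # 640
stat -c '%a' "$d/cert"                   # 750
</source>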
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct  9 12:13:17 2020 GMT
</source>
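For monitoring, openssl's ''-checkend'' option is more convenient than parsing the date: its exit status says whether the certificate is still valid after the given number of seconds. A sketch using a throwaway self-signed certificate, so it runs without a live server (hypothetical CN):
<source lang=bash>
tmp=$(mktemp -d)
# Throwaway certificate valid for 60 days, for demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 60 \
  -subj "/CN=radsecproxy.example.org" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# Does the certificate survive the next 30 days? Exit status 0 means yes.
openssl x509 -in "$tmp/cert.pem" -noout -checkend $((30*24*3600)) \
  && echo "OK: valid for at least 30 more days"
</source>
Against the live proxy, pipe the ''s_client'' output from above into <code>openssl x509 -noout -checkend</code> instead of ''-enddate''.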
[[Kategorie:Linux]]
=systemd=
Yes, like most daemon names, it is written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It has many new features: it keeps watching processes after they have started, lists the sockets owned by processes it started, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as Solaris' SMF (Service Management Facility) :-).
=Take a look with systemctl=
==List units==
As the listing shows, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look there to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
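The capability sets of any running process can be inspected under /proc; a quick sketch for the current shell (''CapBnd'' is the bounding set that ''CapabilityBoundingSet='' restricts):
<source lang=bash>
# Shows CapInh, CapPrm, CapEff, CapBnd (and on newer kernels CapAmb) as hex masks;
# the masks can be decoded with capsh --decode from the libcap tools, if installed
grep Cap /proc/self/status
</source>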
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and privileges it currently has; UID changes are prohibited. No setuid binary will help an attacker gain more privileges than the user of the exploited service.
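A minimal sketch of a unit combining these hardening directives (hypothetical service name and binary path, not taken from any real package):
<source lang=ini>
[Unit]
Description=example hardened daemon

[Service]
ExecStart=/opt/example/sbin/exampled
User=example
NoNewPrivileges=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
ProtectSystem=full
ProtectHome=yes
PrivateTmp=yes

[Install]
WantedBy=multi-user.target
</source>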
=systemd-timesyncd, an alternative to ntp=
ntpd is a good but fat old horse for servers; clients do not necessarily need it. Give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The server list is a space separated list of NTP servers.
FallbackNTP is a list of servers if none of the NTP list could be reached.
If you want to split them into multiple files or generate them at start you can put files with the ending <i>.conf</i> in <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the configuration, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Verify the result with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'' we want ''zed.service'' to be started (if it is enabled), and we require ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp that lives only as long as the unit is up. When the unit goes down, the directories are cleaned up. This is done via a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share one private tmp directory, you can use ''JoinsNamespaceOf=<unit1> <unit2> ...'' (a space-separated list) in the [Unit] section.
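As a sketch with two made-up units, ''a.service'' owns the namespace and ''b.service'' joins it:
<source lang=ini>
# a.service
[Service]
PrivateTmp=true
ExecStart=/usr/local/bin/a

# b.service -- shares /tmp and /var/tmp with a.service
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
ExecStart=/usr/local/bin/b
</source>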
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it no longer works:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we had better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Examples =
== Oracle ==
File this as <i>/usr/lib/systemd/system/dbora@.service</i> (SLES12):
<source lang=ini>
# /usr/lib/systemd/system/dbora@.service
#
# Configure instances for your Oracle database versions like this:
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# ExecStart/ExecStop are not run by a shell, so redirections and '&' do not work here.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart ${ORACLE_HOME}
# Shutdown is done by dbshut, not dbstart.
ExecStop=/opt/oracle/product/%i/db/bin/dbshut ${ORACLE_HOME}
[Install]
WantedBy=multi-user.target
</source>
Then reload systemd and enable an instance:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
[[Kategorie:Linux]]
=systemd=
Yes, like daemons are usually written this has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the normal init system with the ability to watch processes after the start has done, list sockets owned by processes started with systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs to have. Even if it starts as root all unnessesary capabilities are dropped for starting the process.
I dont want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here but you can take a look to understand what this capabilities are.
'''BUT''' beware of programs which just test on UID 0!
==Nailing a process to it's rights : NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the processtree from this level on will stuck at the UID and the privileges it has. This prohibits UID changes. No set UID binary will help the hacker to get more privileges than the user of the exploited service.
=systemd-timesyncd an alternative to ntp=
The ntpd is a good and fat old horse for servers but clients do not necessarily need this one. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The server list is a space separated list of NTP servers.
FallbackNTP is a list of servers if none of the NTP list could be reached.
If you want to split them into multiple files or generate them at start you can put files with the ending <i>.conf</i> in <i>/etc/systemd/timesyncd.conf.d/</i>.
After you setup the config you can enable the timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice it worked <i>NTP synchronized: yes</i>.
If not take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a seperate namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Examples =
== Oracle ==
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
# dbstart forks and exits, so treat the unit as a oneshot that stays active
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd does not invoke a shell, so shell redirections (>>, 2>&1) and '&' cannot be used here
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
cd8ea1540d2d5e609fa82e79cf64456edf1d9c41
1812
1811
2017-08-28T12:04:55Z
Lollypop
2
/* Oracle */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names usually are, this is written lowercase.
=What is systemd?=
systemd is a replacement for the old SysV init system on Linux.
It has many new features: it keeps watching processes after they have started, lists sockets owned by processes it started, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) on Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process is executed.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it already has. This prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
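A minimal sketch of how this could look in a unit file (the service name and binary path are made up for illustration):
<source lang=ini>
[Service]
ExecStart=/usr/local/bin/example-daemon
# From here on, no execve() can grant additional privileges,
# e.g. via set-UID binaries or file capabilities.
NoNewPrivileges=true
</source>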
=systemd-timesyncd, an alternative to ntpd=
ntpd is a good but heavyweight old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP='' takes a space-separated list of NTP servers.
''FallbackNTP='' lists servers to use if none of the ''NTP='' servers can be reached.
If you want to split the settings into multiple files or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
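For example, a drop-in file could look like this (file name and server names are made up for illustration):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/local.conf
[Time]
NTP=ntp1.example.org ntp2.example.org
</source>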
After you have set up the configuration you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', systemd wants ''zed.service'' started (if enabled) and requires both ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts private instances of /tmp and /var/tmp that live only as long as the unit is up. When the unit stops, the directories are removed. This is implemented with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=<unit1> [<unit2> ...]'' (a space-separated list).
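As a sketch, two units sharing one private /tmp could be declared like this (the unit names are made up for illustration):
<source lang=ini>
# a.service
[Service]
PrivateTmp=true

# b.service -- joins the namespace of a.service
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
</source>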
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
# dbstart forks and exits, so treat the unit as a oneshot that stays active
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd does not invoke a shell, so shell redirections (>>, 2>&1) and '&' cannot be used here
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
7b9b6a81551227cdbd0a1e2b1ee6e2116d32f60d
SSL and TLS
0
229
1813
1762
2017-09-29T11:03:03Z
Lollypop
2
/* HSTS - HTTP Strict Transport Security */
wikitext
text/x-wiki
[[Kategorie: Security]]
=Web=
==HTTPS==
===TLSA - Record ===
<source lang=bash>
$ openssl s_client -connect lars.timmann.de:443 </dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER | openssl sha256
(stdin)= e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</source>
This could be used for a TLSA record like this:
<pre>
_443._tcp.lars.timmann.de. 60 IN TLSA 3 0 1 e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</pre>
===HSTS - HTTP Strict Transport Security===
<source lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
$ sudo a2enmod headers
</source>
The max-age is entered in seconds:
<source lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</source>
So this value is one year in seconds.
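If bc is not at hand, plain shell integer arithmetic yields the day count as well:
<source lang=bash>
$ # 86400 = 60*60*24 seconds per day
$ echo $(( 31556926 / 86400 ))
365
</source>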
What changes when we set this header and the browser understands it?
The browser rewrites every link on this page to HTTPS, even if it is an HTTP link. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was written by Hanno Böck and is available at [https://github.com/hannob/hpkp Github].
I added a create option, which makes the script more convenient for me, at [https://github.com/Popyllol/hpkp Github] as well.
The public key pins for this site are created like this:
<source lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</source>
At the end you get one line for adding Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<source lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</source>
You need to enable the headers module in Apache.
On Ubuntu just do:
<source lang=bash>
$ sudo a2enmod headers
</source>
=Mail=
==STARTTLS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</source>
You can specify the security priority for the handshake like this:
<source lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</source>
Or use sslscan to check the available ciphers:
<source lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</source>
==SMTPS==
with OpenSSL:
<source lang=bash>
$ openssl s_client -connect <mailserver>:465
</source>
with GNUTLS:
<source lang=bash>
$ gnutls-cli --port 465 <mailserver>
</source>
73547cb0572e6f25c941e7a688836371e5a208e8
Apache
0
205
1814
1262
2017-09-29T11:04:20Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country etc. to values that fit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, you can remove the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
80bbce1c455d7a311b06e5be2398f80b37432dd3
Ubuntu apt
0
120
1815
1400
2017-10-13T11:09:48Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Get all non-LTS packages ==
<source lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</source>
== Configuring a proxy for apt ==
Put this into your <i>/etc/apt/apt.conf.d/00proxy</i>:
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
== Getting some packages from a newer release ==
In this example we are running <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the name service.
=== Pin the normal release ===
<source lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</source>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Update the package lists ===
<source lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</source>
=== Check with "apt-cache policy" which version is preferred now ===
<source lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</source>
=== Upgrade to the packages from the new release ===
<source lang=bash>
# apt upgrade
...
</source>
3ed1d9dc62bab28a87ac587bccaf102355347278
1816
1815
2017-10-13T11:13:46Z
Lollypop
2
/* Upgrade to the packages from the new release */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Get all non-LTS packages ==
<source lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</source>
== Configuring a proxy for apt ==
Put this into your <i>/etc/apt/apt.conf.d/00proxy</i>:
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
== Getting some packages from a newer release ==
In this example we are running <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the name service.
=== Pin the normal release ===
<source lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</source>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Update the package lists ===
<source lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</source>
=== Check with "apt-cache policy" which version is preferred now ===
<source lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</source>
=== Upgrade to the packages from the new release ===
<source lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</source>
This shows that the pinning to xenial works ;-).
=== Override pinning for one package ===
<source lang=bash>
# apt -t zesty install libstdc++6
...
</source>
3442595b5b9d29cf7d2ff7ec199504509b3ab4c1
PowerDNS
0
287
1817
1308
2017-10-13T15:34:21Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
<source lang=ini>
# /lib/systemd/system/pdns.service
[Unit]
Description=PowerDNS Authoritative Server
Documentation=man:pdns_server(1) man:pdns_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target
After=network-online.target mysqld.service postgresql.service slapd.service mariadb.service
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --write-pid=no
Restart=on-failure
RestartSec=1
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
# ProtectSystem=full will disallow write access to /etc and /usr, possibly
# not being able to write slaved-zones into sqlite3 or zonefiles.
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns-recursor.service
[Unit]
Description=PowerDNS Recursor
Documentation=man:pdns_recursor(1) man:rec_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target nss-lookup.target
Before=nss-lookup.target
After=network-online.target
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --disable-syslog
Restart=on-failure
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
LimitNOFILE=4200
[Install]
WantedBy=multi-user.target
</source>
95b366e4235bb65267cc1145ba1cd6292aca6fab
1818
1817
2017-10-13T16:12:43Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
<source lang=ini>
# /lib/systemd/system/pdns.service
[Unit]
Description=PowerDNS Authoritative Server
Documentation=man:pdns_server(1) man:pdns_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target
After=network-online.target mysqld.service postgresql.service slapd.service mariadb.service
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --write-pid=no
Restart=on-failure
RestartSec=1
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
# ProtectSystem=full will disallow write access to /etc and /usr, possibly
# not being able to write slaved-zones into sqlite3 or zonefiles.
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns-recursor.service
[Unit]
Description=PowerDNS Recursor
Documentation=man:pdns_recursor(1) man:rec_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target nss-lookup.target
Before=nss-lookup.target
After=network-online.target
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --disable-syslog
Restart=on-failure
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
LimitNOFILE=4200
[Install]
WantedBy=multi-user.target
</source>
03c3a456ecce751b91cabe763ba5bed73ba437fd
1827
1818
2017-11-23T17:07:55Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
or
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns.service
[Unit]
Description=PowerDNS Authoritative Server
Documentation=man:pdns_server(1) man:pdns_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target
After=network-online.target mysqld.service postgresql.service slapd.service mariadb.service
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --write-pid=no
Restart=on-failure
RestartSec=1
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
# ProtectSystem=full will disallow write access to /etc and /usr, possibly
# not being able to write slaved-zones into sqlite3 or zonefiles.
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns-recursor.service
[Unit]
Description=PowerDNS Recursor
Documentation=man:pdns_recursor(1) man:rec_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target nss-lookup.target
Before=nss-lookup.target
After=network-online.target
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --disable-syslog
Restart=on-failure
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
LimitNOFILE=4200
[Install]
WantedBy=multi-user.target
</source>
9c1ee488d71a8f6c047b4c7ac035ba67dc0c6243
1828
1827
2017-11-24T13:36:52Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
1. /etc/apt/apt.conf.d/01pinning
<source lang=apt>
APT::Default-Release "xenial";
</source>
2. /etc/apt/preferences.d/pdns
<source lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</source>
3. /etc/apt/sources.list
Add the zesty sources, for example:
<source lang=apt>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</source>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
or
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns.service
[Unit]
Description=PowerDNS Authoritative Server
Documentation=man:pdns_server(1) man:pdns_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target
After=network-online.target mysqld.service postgresql.service slapd.service mariadb.service
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --write-pid=no
Restart=on-failure
RestartSec=1
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
# ProtectSystem=full will disallow write access to /etc and /usr, possibly
# not being able to write slaved-zones into sqlite3 or zonefiles.
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns-recursor.service
[Unit]
Description=PowerDNS Recursor
Documentation=man:pdns_recursor(1) man:rec_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target nss-lookup.target
Before=nss-lookup.target
After=network-online.target
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --disable-syslog
Restart=on-failure
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
LimitNOFILE=4200
[Install]
WantedBy=multi-user.target
</source>
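Note that the bind mount only makes the notify socket visible inside the chroot; the daemons themselves must still be told to chroot. A sketch of the corresponding recursor setting, assuming the /var/chroot path from the example (adjust the config file path to your installation):

```ini
# /etc/powerdns/recursor.conf (path assumed; value matches the bind mount above)
chroot=/var/chroot
```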
76af2a792dfc2f35193bc5c354bf9ace2990bbb8
Solaris LiveUpgrade
0
218
1819
809
2017-10-16T12:37:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<source lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</source>
Higher patch revisions may be available...
==Mount the Solaris 10 DVD ISO-image==
<source lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10u11
</source>
==Upgrade the new BootEnvironment==
<source lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</source>
==Activate the new BootEnvironment==
<source lang=bash>
# luactivate Solaris10u11
</source>
=Install EIS patches=
==Mount the new EIS-ISO==
<source lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</source>
==Update LU patches==
<source lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</source>
==Mount the new BootEnvironment==
<source lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</source>
==Install EIS-Patches==
<source lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
Now the Solaris 10_x86 Recommended Patches...
...
</source>
==Problems: Installing this patch set to an alternate boot environment first requires the live boot environment to have patch utilities and other prerequisite patches==
<source lang=bash>
Installing this patch set to an alternate boot environment first requires the
live boot environment to have patch utilities and other prerequisite patches
at the same (or higher) patch revisions as those delivered by this patch set.
The required prerequisite patches can be applied to the live boot environment
by invoking this script with the '--apply-prereq' option, ie.
./installpatchset --apply-prereq --s10patchset
</source>
===Solution===
<source lang=bash>
root@solaris10 # cd /mnt/var/tmp/10/10_x86_Recommended
root@solaris10 # ./installpatchset --apply-prereq --s10patchset
...
Installation of prerequisite patches complete.
...
</source>
==Umount the BE==
<source lang=bash>
# luumount Solaris10-EIS-15JUL15
</source>
==Activate BE & Reboot==
<source lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</source>
= Solaris 10 CPU with LiveUpgrade =
== Install LiveUpgrade (and some other necessary) Patches==
In the unzipped CPU directory, run:
<source lang=bash>
root@solaris10 # ./installpatchset --s10patchset --apply-prereq
</source>
== Create LiveUpgrade environment ==
In this example we use the CPU_2017-07:
<source lang=bash>
root@solaris10 # lucreate -n Solaris_10-CPU_2017-07
...
Population of boot environment <Solaris_10-CPU_2017-07> successful.
Creation of boot environment <Solaris_10-CPU_2017-07> successful.
</source>
== Apply the patchset to the LiveUpgrade environment ==
<source lang=bash>
root@solaris10 # ./installpatchset --s10patchset -B Solaris_10-CPU_2017-07
</source>
== Activate the new patched LiveUpgrade environment ==
<source lang=bash>
root@solaris10 # luactivate Solaris_10-CPU_2017-07
</source>
Now you can reboot into it whenever you want, but you should do so soon: anything written in the meantime, such as logs, will remain only in the old boot environment.
4adaffc5a9afc0f2365bfac29f5b5082a4120e98
Linux Tipps und Tricks
0
273
1820
1357
2017-11-03T07:17:07Z
Lollypop
2
/* Hard reboot */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more details, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
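The delete path can be derived mechanically from an lsscsi line; a small sketch using the example line from above (pure text processing, nothing is actually deleted):

```shell
# Extract the H:C:T:L id from an lsscsi line and build the sysfs delete path
line='[32:0:0:0]   disk    VMware   Virtual disk     1.0   /dev/sdb'
id=$(printf '%s\n' "$line" | sed 's/^\[\([^]]*\)\].*/\1/')
path="/sys/bus/scsi/drivers/sd/${id}/delete"
echo "$path"
```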
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Online the already-online device to force the zpool to recheck its size; this resizes it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion when the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
3434c37de3396ce2e5e159ed89513d35a67b8487
Perl Tipps und Tricks
0
178
1821
1761
2017-11-10T10:07:00Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Perl|Tipps und Tricks]]
==Negative match in RegEx (?:(?!PATTERN).)*==
Using perl as a special grep :-):
<source lang=bash>
perl -ne 'if (/(<a href=[^>]+action=login[^>]+>(?:(?!<\/a>).)*<\/a>)/){ print $1."\n"; }' index.html
</source>
This one matches a complete <pre><a href=...action=login...>(not </a>)</a></pre>.
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<source lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</source>
== Config ==
Override compile-time flags on the command line like this:
<source lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</source>
I used it to run sa-compile on Solaris:
<source lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</source>
92366e7ed85447273d742a1ed10684b6828165fc
1822
1821
2017-11-10T12:12:55Z
Lollypop
2
/* Negative match in RegEx (?:(?!PATTERN).)* */
wikitext
text/x-wiki
[[Kategorie:Perl|Tipps und Tricks]]
==Negative match in RegEx (?:(?!PATTERN).)*==
Using perl as a special grep :-):
<source lang=bash>
perl -ne 'if (/(<a href=[^>]+action=login[^>]+>(?:(?!<\/a>).)*<\/a>)/){ print $1."\n"; }' index.html
</source>
This one matches a complete <pre><a href=...action=login...>(not </a>)</a></pre>.
Or more complex:
<source lang=bash>
perl -ne 'if (/(<a href=[^h]*http[s]{0,1}:\/\/(([^\/"]+)[^> "]+)[^> ]*>(?:(?!<\/a>).)*<\/a>)/){ print $3."|".$2."|".$1."\n"; }' index.html
</source>
Prints out:
<pre>
<server>|<url>|<complete href>
</pre>
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<source lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</source>
== Config ==
Override compile-time flags on the command line like this:
<source lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</source>
I used it to run sa-compile on Solaris:
<source lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</source>
260a04c867a8449141e0fc906fa07706576a47e3
1823
1822
2017-11-10T12:13:54Z
Lollypop
2
/* Negative match in RegEx (?:(?!PATTERN).)* */
wikitext
text/x-wiki
[[Kategorie:Perl|Tipps und Tricks]]
==Negative match in RegEx (?:(?!PATTERN).)*==
Using perl as a special grep :-):
<source lang=bash>
perl -ne 'if (/(<a href=[^>]+action=login[^>]+>(?:(?!<\/a>).)*<\/a>)/){ print $1."\n"; }' index.html
</source>
This one matches a complete <pre><a href=...action=login...>(not </a>)</a></pre>.
Or more complex:
<source lang=bash>
perl -ne 'if (/(<a href=[^h]*(http[s]{0,1}:\/\/([^\/"]+)[^> "]+)[^> ]*>(?:(?!<\/a>).)*<\/a>)/){ print $3."|".$2."|".$1."\n"; }' index.html
</source>
Prints out:
<pre>
<server>|<url>|<complete href>
</pre>
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<source lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</source>
== Config ==
Override compile-time flags on the command line like this:
<source lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</source>
I used it to run sa-compile on Solaris:
<source lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</source>
3750c540f681e318e6d974f444fbe3fc8efe5ccb
OpenSSL
0
347
1824
2017-11-21T17:06:16Z
Lollypop
2
The page was newly created: „[[Kategorie:Security]] =Verify= <source lang=bash> # openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem…“
wikitext
text/x-wiki
[[Kategorie:Security]]
=Verify=
<source lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</source>
<source lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout
</source>
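A self-contained way to try <code>openssl verify</code> without the Spacewalk files: generate a throwaway self-signed certificate and verify it against itself as the CA (all paths are temporary files created here):

```shell
# Create a disposable self-signed cert, then verify it with itself as CAfile
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -days 1 2>/dev/null
result=$(openssl verify -CAfile "$tmp/cert.pem" "$tmp/cert.pem")
rm -rf "$tmp"
echo "$result"
```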
03feb78bafae288c589861de4fe47544dfb5ad43
SuSE Manager
0
348
1825
2017-11-22T13:39:51Z
Lollypop
2
The page was newly created: „[[Kategorie:Linux]] =SuSE Manager= ==Channels== ===Refresh channle list=== <source lang=bash> # mgr-sync refresh </source> ===List available channels=== <sourc…“
wikitext
text/x-wiki
[[Kategorie:Linux]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
==Bootstrap==
===Create bootstrap repo===
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]].
<source lang=bash>
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Clean up the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
1b43a89f9a8a8847a0db1e5fc005242f0e20ea0e
1826
1825
2017-11-22T13:40:38Z
Lollypop
2
/* Create bootstrap repo */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
This will result in a new channel pool named, e.g., sles12-sp3-pool-x86_64-2017-11-22_14:26:42.
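To verify that the clone was actually created, the channel list can be filtered for the timestamp suffix (a quick sanity check, assuming spacecmd is already configured against your server):
<source lang=bash>
# List all clones of the pool created by the command above
# spacecmd -q softwarechannel_list | grep sles12-sp3-pool-x86_64-
</source>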
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]].
<source lang=bash>
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Clean up the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
810d4f4be41274bb2e877ecbf4c8c57d58ba2bcc
PowerDNS
0
287
1829
1828
2017-11-24T13:43:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<source lang=apt>
APT::Default-Release "xenial";
</source>
===/etc/apt/preferences.d/pdns===
<source lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</source>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<source lang=apt>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</source>
===Do the upgrade===
<source lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</source>
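Before (or after) installing, the pinning can be confirmed with standard apt tooling; the zesty candidate should win for the pdns packages while xenial stays the default elsewhere:
<source lang=bash>
# Show which version apt would install and which pin priority applies
# apt-cache policy pdns-recursor pdns-tools
</source>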
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> change the setting from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to use journald's dev-log socket as input:
In <i>/etc/syslog-ng/syslog-ng.conf</i> change the section from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
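Then restart syslog-ng and send a test message through the whole chain (the service name syslog-ng.service and the target log file /var/log/syslog are assumptions that may differ per distribution):
<source lang=bash>
# systemctl restart syslog-ng.service
# logger "journald-to-syslog-ng test"
# grep "journald-to-syslog-ng test" /var/log/syslog
</source>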
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
or
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns.service
[Unit]
Description=PowerDNS Authoritative Server
Documentation=man:pdns_server(1) man:pdns_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target
After=network-online.target mysqld.service postgresql.service slapd.service mariadb.service
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --write-pid=no
Restart=on-failure
RestartSec=1
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
# ProtectSystem=full will disallow write access to /etc and /usr, possibly
# not being able to write slaved-zones into sqlite3 or zonefiles.
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /lib/systemd/system/pdns-recursor.service
[Unit]
Description=PowerDNS Recursor
Documentation=man:pdns_recursor(1) man:rec_control(1)
Documentation=https://doc.powerdns.com
Wants=network-online.target nss-lookup.target
Before=nss-lookup.target
After=network-online.target
After=var-chroot-run-systemd-notify.mount
[Service]
Type=notify
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --disable-syslog
Restart=on-failure
StartLimitInterval=0
PrivateTmp=true
PrivateDevices=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
LimitNOFILE=4200
[Install]
WantedBy=multi-user.target
</source>
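After placing the unit files, systemd has to pick them up. A minimal sketch, assuming the unit names from the examples above:
<source lang=bash>
# Re-read unit files, activate the bind mount, then restart the recursor
# systemctl daemon-reload
# systemctl enable --now var-chroot-run-systemd-notify.mount
# systemctl restart pdns-recursor.service
</source>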
80bc8152e9d02180ba268476d4850ebf079b2afc
Hauptseite
0
1
1830
1274
2017-11-27T13:36:37Z
Lollypop
2
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
Bitte zuerst meinen [[Project:General_disclaimer|Haftungsausschluss]] lesen!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages depth=2>KnowHow</categorytree>
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Getting started with the wiki =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents user's guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
In its ruling of 12 May 1998, the Landgericht Hamburg decided that by placing a link one can be held jointly responsible for the contents of the linked pages. This can only be prevented by explicitly distancing oneself from those contents. For all links on this homepage: I hereby explicitly distance myself from all contents of all pages linked from my homepage and do not adopt these contents as my own.
81c489fd383faa8f8b6a9baef42679dde63216e9
Ecryptfs
0
349
1831
2017-12-07T08:37:29Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Linux]] ==Tipps&Tricks== ===ecryptfs-mount-private -> mount: No such file or directory=== ====Problem==== <source lang=bash> user@host:~$ ecryptfs-…“
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Tips & Tricks==
===ecryptfs-mount-private -> mount: No such file or directory===
====Problem====
<source lang=bash>
user@host:~$ ecryptfs-mount-private
Enter your login passphrase:
Inserted auth tok with sig [affecaffeeaffe00] into the user session keyring
mount: No such file or directory
user@host:~$
</source>
The keys are correctly unlocked:
<source lang=bash>
user@host:~$ keyctl list @u
2 keys in keyring:
1013878144: --alswrv 2223 2223 user: affecaffeeaffe01
270316877: --alswrv 2223 2223 user: affecaffeeaffe02
</source>
But no luck:
<source lang=bash>
$ ls -al
total 20
drwx------ 3 ansible admin 8 Dez 7 09:12 .
drwxr-xr-x 6 root root 6 Dez 7 09:10 ..
lrwxrwxrwx 1 root root 32 Dez 7 09:11 .Private -> /home/.ecryptfs/ansible/.Private
lrwxrwxrwx 1 root root 33 Dez 7 09:11 .ecryptfs -> /home/.ecryptfs/ansible/.ecryptfs
lrwxrwxrwx 1 root root 52 Dez 7 09:12 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt
lrwxrwxrwx 1 root root 56 Dez 7 09:11 ecryptfs-mount-private.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop
</source>
====Workaround====
<source lang=bash>
user@host:~$ keyctl link @u @s
user@host:~$ ecryptfs-mount-private
user@host:~$
</source>
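To confirm that the private directory is actually mounted after the workaround, a quick check (not part of the original error session):
<source lang=bash>
user@host:~$ mount | grep -i ecryptfs
</source>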
832ad7dd822705e6d17a5e8267a5b167f5eb7279
Apache
0
205
1832
1814
2017-12-07T17:05:18Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Web]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country & Co to values that suit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, it can be removed afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
1d0d5f6540cef1e31faac001aa5e8e542fe8c9ba
1841
1832
2018-02-19T15:04:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country & Co to values that suit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, it can be removed afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/apache2/ssl/server.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/server.de.ec-key
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
731a68c85229f7fc46dc7e90a29e9a75c69e7c0f
1842
1841
2018-02-19T15:21:31Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country & Co to values that suit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, it can be removed afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
90e7e2887380f3ad12128e834225420dccf95c4f
1846
1842
2018-03-06T07:44:05Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Generate a certificate ==
===Adjust the default values sensibly===
Set Country & Co to values that suit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, it can be removed afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
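With this configuration, a handshake using a legacy protocol should be refused, which can be checked from the command line (ssl.server.de is the placeholder host from the snippet above):
<source lang=bash>
# Should fail: TLSv1 is disabled
# openssl s_client -connect ssl.server.de:443 -tls1 < /dev/null
# Should succeed: TLSv1.2 is still enabled
# openssl s_client -connect ssl.server.de:443 -tls1_2 < /dev/null
</source>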
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
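The pipeline turns every log file into its own -f option before handing them all to apachetop. Its expansion can be checked with a plain echo in place of apachetop (the file names here are made up):
<source lang=bash>
# printf '%s\n' /var/log/apache2/site1.log /var/log/apache2/site2.log \
    | xargs -n 1 echo -f | xargs echo apachetop
apachetop -f /var/log/apache2/site1.log -f /var/log/apache2/site2.log
</source>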
783629bd74ffed7ac41d3ca393b3aa4a93bb2039
Oracle Tips and Tricks
0
220
1833
1267
2017-12-08T14:19:10Z
Lollypop
2
Lollypop moved the page [[Oracle Tipps und Tricks]] to [[Oracle Tips and Tricks]]
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile is sourced (as happens at login), you have lower-case aliases for all ORACLE_SIDs that set everything you need.
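A minimal, self-contained sketch of what the loop produces, using a made-up oratab entry (the SID ORCL and the ORACLE_HOME path are assumptions, not from a real system; the eval is dropped here since it is not needed for a plain entry):
<source lang=bash>
# Fake oratab with a single entry (hypothetical SID and home)
ORATAB=$(mktemp)
echo 'ORCL:/opt/oracle/product/19c/dbhome_1:Y' > ${ORATAB}
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
    # Ignore empty lines, comments and RAC entries, as in the profile above
    if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
    ALIASNAME=${ORACLE_SID,,}
    ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
    alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID} ORACLE_HOME=${ORACLE_HOME} PATH=${ORACLE_HOME}/bin:${BASE_PATH}"
done < ${ORATAB}
shopt -s expand_aliases  # alias expansion is off in non-interactive shells
orcl                     # the generated alias sets ORACLE_SID, ORACLE_HOME and PATH
echo ${ORACLE_SID}       # ORCL
</source>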
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
ff1a631bcc904cbe7a927c786598675094f684ed
1873
1833
2018-05-16T15:39:59Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile is sourced (as happens at login), you have lower-case aliases for all ORACLE_SIDs that set everything you need.
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
==Startup some Databases manually==
<source lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</source>
7ea344eba5c7387635cf3648f6d14d09537da740
1874
1873
2018-05-16T15:42:04Z
Lollypop
2
/* Startup some Databases manually */
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile is sourced (as happens at login), you have lower-case aliases for all ORACLE_SIDs that set everything you need.
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
==Startup some Databases manually==
For example: first DEVDE, then all other DEV*
<source lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</source>
==Shutdown some Databases manually==
For example: first all other DEV*, then DEVDE
<source lang=bash>
for SID in $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab ) DEVDE
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
lsnrctl stop ${SID}
printf "shutdown immediate\nquit\n" | sqlplus -s "/ as sysdba"
done
</source>
04216401d6c46fc927c8291830386cbcbf8cdc2e
Oracle Tipps und Tricks
0
350
1834
2017-12-08T14:19:10Z
Lollypop
2
Lollypop moved the page [[Oracle Tipps und Tricks]] to [[Oracle Tips and Tricks]]
wikitext
text/x-wiki
#WEITERLEITUNG [[Oracle Tips and Tricks]]
54d40cd32440b48bbf8e68f80d0fe93a7fdc5fd6
SuSE Manager
0
348
1835
1826
2017-12-13T13:23:56Z
Lollypop
2
/* SuSE Manager */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
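The -x expression is just a sed substitution that appends a timestamp to every cloned channel name; in isolation it does this (channel name taken from the example above, the timestamp will of course be the current one):
<source lang=bash>
# echo 'sles12-sp3-pool-x86_64' | sed "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
sles12-sp3-pool-x86_64-2017-11-22_14:26:42
</source>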
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]
<source lang=bash>
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
b0cbe547e23c098b5f4f89ea798f630ef32797ff
MySQL Tipps und Tricks
0
197
1836
1760
2017-12-21T10:34:50Z
Lollypop
2
/* Oneliner */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===MySQL processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
...: Optional parameters to the mysql command
EOH
exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--help)
usage
;;
*)
;;
esac
done
# Fill users which are without host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
user="${grant_user[${param}]}"
if [[ ${user} != ?*"@"?* ]]
then
before=${#grant_user[@]}
if [[ ${user} == "@"?* ]]
then
host="${user/@}"
if [[ "_${host}_" == "_%_" ]]
then
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
fi
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
fi
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
fi
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# if no users specified, show all grants
if [ ${#grant_user[@]} -eq 0 -a ${#grant_db[@]} -eq 0 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL, the setting lasts only until the server restarts.
'''Don't forget to add it to your my.cnf to make it permanent!'''
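A minimal my.cnf sketch to persist the logging settings discussed below (the file locations are assumptions; the option names are the stock MySQL ones):
<source lang=inifile>
[mysqld]
log_output          = FILE
general_log         = 1
general_log_file    = /var/lib/mysql/general.log
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
</source>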
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, nothing is logged at all
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
if you get
ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to investigate on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is logged)
* data=ordered (OK performance, records metadata and groups metadata related to the data changes)
* data=journal (worst performance but best data protection, ext3 default mode, records metadata and all data)
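Put together, a possible fstab line (the device name matches the mkfs example above; the mount point is an assumption):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>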
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the new limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntus /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# trated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
006b5e37a0d7bc70bf42764dd4aba94c48641a9e
1848
1836
2018-03-09T14:07:21Z
Lollypop
2
/* Analyze */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliners==
===Show MySQL-traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===MySQL processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
...: Optional parameters to the mysql command
EOH
exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--help)
usage
;;
*)
;;
esac
done
# Fill users which are without host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
user="${grant_user[${param}]}"
if [[ ${user} != ?*"@"?* ]]
then
before=${#grant_user[@]}
if [[ ${user} == "@"?* ]]
then
host="${user/@}"
if [[ "_${host}_" == "_%_" ]]
then
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
fi
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
fi
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
fi
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# if no users specified, show all grants
if [ ${#grant_user[@]} -eq 0 -a ${#grant_db[@]} -eq 0 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
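Note that for InnoDB tables UPDATE_TIME is only populated since MySQL 5.7; on older versions it stays NULL. To hide those rows, filter them out:
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES WHERE UPDATE_TIME IS NOT NULL ORDER BY UPDATE_TIME;
</source>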
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round((data_length+index_length)/1024/1024,2)) as total_size_mb from information_schema.tables where engine='innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round((data_length+index_length)/1024/1024,2) as size_mb from information_schema.tables where engine='innodb' order by size_mb;
</source>
==Logging==
Changes made with SET GLOBAL last only until the next server restart.
'''Don't forget to add the setting to your my.cnf to make it permanent!'''
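For example, to persist slow-query logging across restarts, the matching my.cnf entries would look like this (values are just an example, file path as used elsewhere on this page):
<source lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
</source>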
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
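The currently active destinations can be checked at any time:
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'log_output';
</source>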
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
For an idea of the binlog file to investigate on the master do this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (ext3 default; good performance; metadata is journaled and data blocks are written out before their related metadata)
* data=journal (worst performance but best data protection; metadata and all data are journaled)
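A matching /etc/fstab entry for the writeback mode could look like this (the mount point is just an example, the device is the one created above):
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>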
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
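The same check also works with plain shell integer arithmetic instead of bc:
<source lang=bash>
# echo $(( 26843545600 / 1024 / 1024 / 1024 ))
25
</source>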
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Then change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=cdot>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the new limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=inifile>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=conf>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=inifile>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=inifile>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=inifile>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
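Keep in mind that dd without a sync flag mostly measures the page cache, not the NFS backend. Adding conv=fsync forces the data out before dd reports its rate, which gives a more honest number:
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536 conv=fsync
</source>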
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
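On MySQL 5.7.6 and later the PASSWORD() function is deprecated (and removed in 8.0), so there the init file should use ALTER USER instead:
<source lang=bash>
# echo "ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';" > /root/mysql-init
</source>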
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
7cef0692be9f4da1ddefb3f59e480f040f031f73
Gromphadorhina oblongonota
0
175
1837
1590
2017-12-22T09:46:52Z
Lollypop
2
wikitext
text/x-wiki
{{Systematik
| DeName = Fauchschabe
| WissName = Gromphadorhina oblongonota
| Autor = van Herrewege, 1973
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| superfamilia = Blaberoidea
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Gromphadorhina
| species = oblongonota
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 48
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174411
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:6332
}}
520cf002c7c883aab6737765d7c4c7722f445ae2
Ubuntu networking
0
278
1838
1258
2018-02-15T08:46:09Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New in 17.10==
===netplan===
The former configuration in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
For example: <i>/etc/netplan/ens160.yaml</i>
<source lang=bash>
network:
  ethernets:
    ens160: {dhcp4: true}
  version: 2
</source>
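For a static address the equivalent configuration would look roughly like this (interface name, addresses, and gateway below are placeholders, not from the original); apply with <i>netplan apply</i>:
<source lang=bash>
network:
  version: 2
  ethernets:
    ens160:
      dhcp4: false
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</source>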
5f49c1156be3165a092f27309324248635fb311f
Chrome
0
351
1839
2018-02-19T07:59:10Z
Lollypop
2
Die Seite wurde neu angelegt: „==Overview of Chrome URLS== * chrome://about/ * chrome://flags/ * chrome://extensions/ ==Useful URLs== * chrome://net-internals/#dns -> Clear Host Cache“
wikitext
text/x-wiki
==Overview of Chrome URLs==
* chrome://about/
* chrome://flags/
* chrome://extensions/
==Useful URLs==
* chrome://net-internals/#dns -> Clear Host Cache
87b38338d5cec8c02f45c256c991432cb784d48f
1840
1839
2018-02-19T08:10:20Z
Lollypop
2
wikitext
text/x-wiki
==Overview of Chrome URLs==
* chrome://about/
== Apps ==
* chrome://apps/
== Extensions ==
* chrome://extensions/
== Special settings ==
* chrome://flags/
== Your Downloads ==
* chrome://downloads/
==Useful URLs==
* chrome://net-internals/#dns -> Clear Host Cache
dd86e1d6f69f6d9292ce4447ff1957348cce9b4e
SSH Tipps und Tricks
0
75
1843
1732
2018-02-20T14:53:27Z
Lollypop
2
/* /etc/ssh/sshd_config */
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, getting where you need to go=
==SSH across one or more hops==
To establish the SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop it is often quite hard to carry the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the route from Host_A to Host_B.
We can only reach Host_B from GW_2, so we add an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and you are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; SSH substitutes the host for %h and the port for %p.
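With OpenSSH 7.3 or newer the same chain can be written without the /dev/tcp trick, using ProxyJump in ~/.ssh/config:
<pre>
Host Host_B
    ProxyJump GW_1,GW_2
</pre>
Or ad hoc on the command line: <i>ssh -J GW_1,GW_2 Host_B</i>.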
==Breaking out of paradise==
Problem: the environment you are working in is so unhappily walled in with firewalls that you cannot get anything done. But you need SSH access to the outside to quickly look something up or fetch something. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH target you want to reach. Of course you can chain further ProxyCommand entries behind ssh-server, and so on.
==Oh yes... the internal wiki...==
That is no problem either if it is only reachable from the internal network; we simply ask via a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to handle shorter strings of digits. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp
# mkdir --mode=0700 /home/sftp/.authorized_keys
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
    AllowGroups sftp
    X11Forwarding no
    AllowTcpForwarding no
    AllowAgentForwarding no
    PermitTunnel no
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory /sftp_chroot/
    AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available on several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<source lang=bash>
$ sudo apt-get install libpam-google-authenticator
</source>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<source>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</source>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : "New authentication token required" is mapped to done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator fails, no other authentication methods will be tried.
* nullok : Allow the user through this auth module even if no OTP secret has been set up yet.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in the /etc/ssh/sshd_config:
<source>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</source>
Without the setting in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because /etc/pam.d/sshd enables password authentication.
46616fb672aa243a06a321665b9802bbe4dcc74f
1844
1843
2018-02-20T14:55:48Z
Lollypop
2
/* SFTP chroot */
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Putty]]
=SSH, getting where you need to go=
==SSH across one or more hops==
To establish the SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop it is often quite hard to carry the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the route from Host_A to Host_B.
We can only reach Host_B from GW_2, so we add an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and you are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; SSH substitutes the host for %h and the port for %p.
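With OpenSSH 7.3 or newer the same chain can be written without the /dev/tcp trick, using ProxyJump in ~/.ssh/config:
<pre>
Host Host_B
    ProxyJump GW_1,GW_2
</pre>
Or ad hoc on the command line: <i>ssh -J GW_1,GW_2 Host_B</i>.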
==Breaking out of paradise==
Problem: the environment you are working in is so unhappily walled in with firewalls that you cannot get anything done. But you need SSH access to the outside to quickly look something up or fetch something. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then add to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH target you want to reach. Of course you can chain further ProxyCommand entries behind ssh-server, and so on.
==Oh yes... the internal wiki...==
That is no problem either if it is only reachable from the internal network; we simply ask via a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to handle shorter strings of digits. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
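To see the conversion in action without a real key, the same awk program can be run on a dummy SSH2 public-key block (plain awk instead of nawk, and with printf "%s" so the key data is never treated as a format string); the base64 payload below is shortened dummy data, not a real key:
<source lang=bash>
# Write a minimal dummy .ppk-style public key
cat > /tmp/sample.ppk <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "user@host"
AAAAB3NzaC1yc2EAAAADAQAB
AAAAgQC7vbqajDw4o6gJy8UtmUbk
---- END SSH2 PUBLIC KEY ----
EOF
# Join the base64 lines and append the comment, OpenSSH style
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf "%s", line; getline line;} printf " %s\n",comment;}' /tmp/sample.ppk
# -> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7vbqajDw4o6gJy8UtmUbk user@host
</source>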
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</source>
==/etc/fstab==
<source lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
    AllowGroups sftp
    X11Forwarding no
    AllowTcpForwarding no
    AllowAgentForwarding no
    PermitTunnel no
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory /sftp_chroot/
    AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
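A note on ChrootDirectory: sshd refuses the chroot unless every path component is owned by root and not group- or world-writable. A quick sanity-check sketch, using a temporary directory as a stand-in for /sftp_chroot:
<source lang=bash>
# sshd requires root ownership and mode <= 0755 on every
# component of the ChrootDirectory path; verify with stat:
dir=$(mktemp -d)          # stand-in for /sftp_chroot
chmod 0755 "$dir"
stat -c '%a %U' "$dir"    # should report 755 and root on a real setup
</source>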
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available on several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<source lang=bash>
$ sudo apt-get install libpam-google-authenticator
</source>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<source>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</source>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : "New authentication token required" is mapped to done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator fails, no other authentication methods will be tried.
* nullok : Allow the user through this auth module even if no OTP secret has been set up yet.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in the /etc/ssh/sshd_config:
<source>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</source>
Without the setting in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because /etc/pam.d/sshd enables password authentication.
9ee6f8533155e7c27c0f380f6f1df6ad9c20d14c
Solaris 11 hwmgmt
0
352
1845
2018-03-02T07:28:35Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Solaris11]] =Commands= ==hwmgmtcli== ==ilomconfig== # ilomconfig list network ==raidconfig== raidconfig list all ==fwupdate== fwupdate list…“
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
=Commands=
==hwmgmtcli==
==ilomconfig==
<source lang=bash>
# ilomconfig list network
</source>
==raidconfig==
<source lang=bash>
raidconfig list all
</source>
==fwupdate==
<source lang=bash>
fwupdate list all
</source>
==itpconfig==
<source lang=bash>
# itpconfig list interconnect
Interconnect
============
State: enabled
Type: USB Ethernet
SP Interconnect IP Address: 169.254.182.76
Host Interconnect IP Address: 169.254.182.77
Interconnect Netmask: 255.255.255.0
SP Interconnect MAC Address: 02:21:28:57:47:16
Host Interconnect MAC Address: 02:21:28:57:47:17
</source>
24537a7bbdc0fa992a4728bb65fb406046b4e259
Category:Webserver
14
353
1847
2018-03-06T07:44:39Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
ZFS on Linux
0
222
1849
1757
2018-03-22T15:49:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*" SUBSYSTEM=="block" ACTION=="add|change" PROGRAM="/lib/udev/zvol_id /dev/%k" RESULT=="rpool/VM/*" OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print $0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
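The ashift column above is just log2 of the physical sector size; as a standalone sketch of the same exponent calculation:
<source lang=bash>
# log2 of the physical sector size gives the ashift value
ashift() {
    size=$1
    i=0
    while [ "$size" -gt 1 ]; do
        size=$(( size / 2 ))
        i=$(( i + 1 ))
    done
    echo "$i"
}
ashift 512    # -> 9
ashift 4096   # -> 12
</source>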
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# In /etc/default/grub:
#   comment out GRUB_HIDDEN_TIMEOUT=0
#   remove "quiet splash" from GRUB_CMDLINE_LINUX_DEFAULT
#   uncomment GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==ARC Cache==
===Limit the cache temporarily, without a reboot===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot these values take effect.
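A side note: the $[...] arithmetic form used above is deprecated bash syntax; POSIX arithmetic expansion $(( )) gives the same result:
<source lang=bash>
# 512 MiB expressed in bytes
echo $(( 512 * 1024 * 1024 ))   # -> 536870912
</source>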
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
72c1fa516b79924ecb90be7e9e43699f2f57e663
1850
1849
2018-03-22T15:49:37Z
Lollypop
2
/* Make the cache limit permanent */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*" SUBSYSTEM=="block" ACTION=="add|change" PROGRAM="/lib/udev/zvol_id /dev/%k" RESULT=="rpool/VM/*" OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print $0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
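The ashift column above is just log2 of the physical sector size; as a standalone sketch of the same exponent calculation:
<source lang=bash>
# log2 of the physical sector size gives the ashift value
ashift() {
    size=$1
    i=0
    while [ "$size" -gt 1 ]; do
        size=$(( size / 2 ))
        i=$(( i + 1 ))
    done
    echo "$i"
}
ashift 512    # -> 9
ashift 4096   # -> 12
</source>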
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
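As an aside, the INTERFACE detection in the walkthrough above simply takes the second field of the first line of the `ip` output and strips the trailing colon. Against a canned sample line (the interface name is just an example):
<source lang=bash>
# Sample first line as printed by `ip a s scope global`
sample='2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP'
echo "$sample" | awk 'NR==1{gsub(/:$/,"",$2);print $2;}'   # prints: ens160
</source>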
==ARC Cache==
===Limit the cache at runtime (not permanent)===
For example, limit it to 512 MB (far too small for production environments; just an example):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
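Note that `$[…]` is an obsolete bash arithmetic form; the POSIX `$(( ))` expansion yields the same byte count:
<source lang=bash>
# 512 MB in bytes, the value written to zfs_arc_max above
echo $((512*1024*1024))   # prints: 536870912
</source>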
===Make the cache limit permanent===
For example, limit it to 512 MB (far too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
31a3da5e1d2f379081d89bd18b0463cbbb2e93f7
1860
1859
2018-04-24T11:04:20Z
Lollypop
2
/* Backup ZFS settings= */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as a raw VMDK device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example, to check sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
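The awk `exponent` function in that one-liner is just an integer log2 of the physical sector size. The same computation as a standalone shell function (a sketch; the function name is made up):
<source lang=bash>
# ashift = log2(physical sector size in bytes)
ashift_of () {
    local v=$1 i=0
    while [ "$v" -gt 1 ]; do v=$((v / 2)); i=$((i + 1)); done
    echo "$i"
}
ashift_of 512    # prints: 9
ashift_of 4096   # prints: 12
</source>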
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
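The sources.list generator used twice in the walkthrough is plain brace expansion piped through xargs; run on its own (without the redirection) it emits one deb line per suite:
<source lang=bash>
echo -n xenial{,-security,-updates} | \
    xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe"
# deb http://archive.ubuntu.com/ubuntu xenial main universe
# deb http://archive.ubuntu.com/ubuntu xenial-security main universe
# deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
</source>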
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
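To pull a single counter out of that table, matching on the first column with awk is enough; here against an inlined sample line (on a live system read /proc/spl/kstat/zfs/arcstats instead):
<source lang=bash>
sample='c_max                           4    1073741824'
echo "$sample" | awk '$1 == "c_max" {print $3}'   # prints: 1073741824
</source>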
===Limit the cache at runtime (not permanent)===
For example, limit it to 512 MB (far too small for production environments; just an example):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example, limit it to 512 MB (far too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
==Backup ZFS settings==
A little script; use it at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on Solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
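For a filesystem with a single local property such as compression=lz4, the printf formats used by the script produce output of roughly this shape (a sketch with hard-coded values; the real script reads them via `zfs get`):
<source lang=bash>
ZFS_CMD=/sbin/zfs
printf '%s create \\\n' "${ZFS_CMD}"
printf '\t-o %s=%s \\\n' compression lz4
printf '\t%s\n' rpool/home
# /sbin/zfs create \
#     -o compression=lz4 \
#     rpool/home
</source>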
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
7cdce04569630ff5ed49ae23d16c311171389282
Fibrechannel Analyse
0
139
1851
1415
2018-03-23T14:18:18Z
Lollypop
2
/* Sonstiges */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibrechannel Analyse=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely the ones with ID 1 and ID 3.
Switch ID 1
Port 1 and 5 : Node WWN 200400a0b85bb030
Port 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Port 1 and 5 : Node WWN 200200a0b85aeb2d
Port 2 and 6 : Node WWN 200600a0b86e10e4
So we, together with 2 storage arrays, hang off the switch with ID 1, and there is a link to a switch with ID 3 to which 2 more storage arrays are attached.
* Node WWN
We see 4 disk devices here, each with 2 entries (same Node WWN).
* Port WWN
This is the Port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage array we see 2 Port WWNs here, i.e. 2 paths through this one host port.
Hence the 4 paths later on (2 per host port) in [[#mpathadm list lu]].
* Type
Disk Device: storage array
Host Bus Adapter: FC card
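The Port_ID decomposition above can be checked with a small helper. This is only a sketch: the function name decode_port_id is made up, and treating the last byte (the "??" column) as the arbitrated-loop physical address (AL_PA) is my assumption, not something the notes state.

```shell
#!/bin/bash
# Hypothetical helper: split the 24-bit FC Port_ID into
# <switch domain><switch port><AL_PA>; the AL_PA interpretation of the
# last byte is an assumption for the "??" column above.
decode_port_id() {
    local id=$(( 0x$1 ))
    printf 'switch=%d port=%d alpa=0x%02x\n' \
        $(( id >> 16 )) $(( (id >> 8) & 0xff )) $(( id & 0xff ))
}

decode_port_id 30200   # first disk device in the dump_map above
decode_port_id 10800   # our own HBA
```

This reproduces the reading above: 30200 is switch 3, port 2; 10800 is switch 1, port 8.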
===luxadm probe===
Lists all recognized Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough hint at the firmware level (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when you work with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage array should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After that, for each path to this device, there follows one block consisting of
Controller (see below)
Device Address <Port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access), the device tells the host which paths it should primarily use to access the LUN.
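The per-path blocks can also be summarized mechanically. A minimal sketch over a saved fragment of the output above (the variable names are made up):

```shell
#!/bin/bash
# Count luxadm display paths per ALUA class and state
# (sample lines copied from the output above).
luxadm_paths='Class                          primary
State                          ONLINE
Class                          secondary
State                          STANDBY
Class                          primary
State                          ONLINE
Class                          secondary
State                          STANDBY'

summary=$(printf '%s\n' "$luxadm_paths" | awk '
    $1 == "Class" { cls = $2 }
    $1 == "State" { count[cls "/" $2]++ }
    END { for (k in count) print k, count[k] }' | sort)
echo "$summary"
```

For the LUN above this yields 2 primary/ONLINE and 2 secondary/STANDBY paths, matching the four path blocks.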
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, Port and Node WWN, current speed, etc.:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
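With four ports and many fields, picking out just the online ports is easier with a small filter. A sketch over a fragment of the output above (sample data inlined so it runs stand-alone):

```shell
#!/bin/bash
# Pair each "HBA Port WWN" with the "State" that follows it and keep
# only the online ports (sample is a fragment of the fcinfo output above).
fcinfo_sample='HBA Port WWN: 2100001b328a417f
        State: online
HBA Port WWN: 2101001b32aa417f
        State: offline
HBA Port WWN: 2100001b32904445
        State: online'

online=$(printf '%s\n' "$fcinfo_sample" | awk '
    /HBA Port WWN:/  { wwn = $NF }
    /State: online$/ { print wwn }')
echo "$online"
```

In the real case this leaves exactly the two fabric-attached ports, c4 and c6.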
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
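Across this many remote ports it is easy to miss the one bad link (note the Loss of Sync Count of 261 above, which usually points at a flaky cable or SFP). A sketch that flags outliers; the threshold of 100 is an arbitrary choice for this example:

```shell
#!/bin/bash
# Flag remote ports whose "Loss of Sync Count" is far above the rest;
# the sample pairs <Remote Port WWN> <Loss of Sync Count> from above.
losses='201600a0b86e103c 3
201700a0b86e103c 261
202200a0b85aeb2d 1'

flagged=$(printf '%s\n' "$losses" | awk '$2 > 100 { print $1, $2 }')
echo "$flagged"
```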
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
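A quick sanity check after zoning changes is to count LUNs per remote target port. A sketch over lines modeled on the listing above:

```shell
#!/bin/bash
# Count LUNs per remote target port from --scsi-target style output
# (sample lines modeled on the listing above).
scsi_targets='Remote Port WWN: 20110002ac0059ce
        LUN: 0
        LUN: 1
        LUN: 2'

lun_count=$(printf '%s\n' "$scsi_targets" | awk '
    /Remote Port WWN:/ { port = $NF }
    /LUN:/             { luns[port]++ }
    END { for (p in luns) print p, luns[p] }')
echo "$lun_count"
```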
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
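The nawk stage of that one-liner only has to reduce each unusable entry to its controller::WWN attachment point before deduplicating. The same step in isolation (plain awk here instead of Solaris nawk, with sample rows from the listing above):

```shell
#!/bin/bash
# Strip the ",<LUN>" suffix from unusable entries so every
# controller::WWN attachment point appears exactly once.
cfgadm_sample='c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable'

apids=$(printf '%s\n' "$cfgadm_sample" | awk '
    $NF == "unusable" { gsub(/,[0-9]+$/, "", $1); print $1 }' | sort -u)
echo "$apids"
```

Each remaining attachment point is then fed to cfgadm -c unconfigure -o unusable_SCSI_LUN.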
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (blocking access to LUNs of a storage array)==
Symptom: the host keeps logging warnings for LUNs it can see but is not supposed to use:
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands : Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands : Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 7 ports have SFPs installed but are unused (0,4,11-15)
# 8 ports have no license and no SFP either (No_Module)
# 9 ports are in use
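The port-state tally above can be automated from the switchshow rows; the State sits in column 6 for both populated and unlicensed ports. A sketch with sample rows copied from the listing:

```shell
#!/bin/bash
# Tally switchshow port states (column 6 holds the State in these rows).
switch_ports='0 0 010000 id N8 No_Light FC
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
16 16 011000 -- N8 No_Module FC (No POD License) Disabled'

states=$(printf '%s\n' "$switch_ports" | awk '
    { n[$6]++ } END { for (s in n) print s, n[s] }' | sort)
echo "$states"
```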
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
===islshow===
<source lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp 8080 running cDOT, as <i>nodefind <address></i> shows:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
Even the Vserver (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
# Parses a Brocade configupload file: prints the enabled zoning config
# with aliases resolved and a vendor guess from each WWN's OUI, then
# emits the CLI commands needed to recreate that config from scratch.
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
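The vendor lookup in the script hinges on where the OUI sits in the WWN: NAA-5 WWNs carry it right after the first nibble (the awk substr start of 2), all others at nibble 5. The same extraction as a stand-alone sketch (the function name oui_of is made up):

```shell
#!/bin/bash
# Extract the vendor OUI from a WWN the way the script above does:
# awk's 1-based substr(WWN, start, 6) becomes 0-based ${wwn:start-1:6}.
oui_of() {
    local wwn=${1//:/}            # drop the colons
    case $wwn in
        5*) echo "${wwn:1:6}" ;;  # NAA 5: awk start=2
        *)  echo "${wwn:4:6}" ;;  # other NAA formats: awk start=5
    esac
}

oui_of 50:0a:09:82:80:d1:21:ee   # 00a098, NetApp in the vendor table
oui_of 21:00:00:24:ff:05:74:e4   # 0024ff, Qlogic in the vendor table
```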
=Commands: NetApp=
==fcp topology show : Where is my front-end SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which shows that we are attached to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : Alias names for more clarity while debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works like this:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs, including multiple matches if a line contains more than one.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
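The one-liner needs gawk's three-argument match(). If only a plain shell is available, the same loop works with bash's =~ matching; a sketch (the function name extract_wwns is made up):

```shell
#!/bin/bash
# Extract every colon-separated WWN from a line using bash regex
# matching instead of gawk's match(line, re, arr).
extract_wwns() {
    local line=$1
    while [[ $line =~ [0-9a-f]{2}(:[0-9a-f]{2}){7} ]]; do
        echo "${BASH_REMATCH[0]}"
        # continue scanning after the match just found
        line=${line#*"${BASH_REMATCH[0]}"}
    done
}

extract_wwns 'E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" F-Port 21:00:00:24:ff:05:74:e4'
```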
==Some additions to NetApp's sanlun lun show on Solaris==
<source lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,paths[disk],online[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</source>
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
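Unrolled for readability, the same pipeline looks like this (a sketch with identical logic; <i>awk</i> stands in for Solaris' <i>nawk</i>):

```shell
#!/bin/bash
# Same logic as the one-liner above, spelled out (sketch).
# Keep attachment points whose last field is "unusable",
# strip the ",<LUN>" suffix, and deduplicate.
unusable_aps() {
  awk '$NF == "unusable" { sub(/,[0-9]+$/, "", $1); print $1 }' | sort -u
}

# On a live system (commented out here):
# cfgadm -al -o show_FCP_dev | unusable_aps |
#   xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
```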
===cfgadm -o force_update -c configure <controller>===
Rescans the LUNs. Be careful: this issues a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (blocking access to LUNs of a storage array)==
If unwanted LUNs show up on the host, the console fills with warnings like these:
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
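Rather than typing them by hand, the blacklist entries can be scraped from the cmlb warnings. A sketch (not from the source; the message format is assumed to match the log above, and note that the final entry in fp.conf must end with a semicolon rather than a comma):

```shell
#!/bin/bash
# Sketch: build pwwn-lun-blacklist lines for /etc/driver/drv/fp.conf
# from "Corrupt label" warnings like those shown above.
# Extracts "<port WWN>,<LUN>" from the ssd@w... part of the device path.
blacklist_entries() {
  awk 'match($0, /ssd@w[0-9a-f]+,[0-9]+/) {
         print "\"" substr($0, RSTART + 5, RLENGTH - 5) "\","
       }' | sort -u
}
```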
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the principal switch (all others are "subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to the E-Port of another switch (san-sw_21)
# 7 ports have SFPs installed but are unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
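Parts of this interpretation can be automated. A small sketch (not from the source) that tallies the State column of <i>switchshow</i> port lines:

```shell
#!/bin/bash
# Sketch: count the State column (6th field) of `switchshow` port lines,
# which start with a numeric index. Layout assumed as in the output above.
port_state_summary() {
  awk '$1 ~ /^[0-9]+$/ { count[$6]++ }
       END { for (s in count) print s, count[s] }' | sort
}
```

Piping the <i>switchshow</i> output above through it would directly yield the Online/No_Light/No_Module counts from the list.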
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
===islshow===
<source lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
This is a NetApp 8080 with cDOT behind this port, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
Even the vserver is shown (NodeSymb)!
And with the node name you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show : Where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWNs do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
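The 24-bit FC address splits into three bytes: domain, area, and AL_PA; on Brocade switches the area byte normally corresponds to the port index. A sketch to decode it (not from the source):

```shell
#!/bin/bash
# Sketch: split a 24-bit FC address (e.g. the "host address" printed by
# `fcp config`) into its domain, area and AL_PA bytes.
decode_fcid() {
  local fcid=$1
  printf 'domain=0x%s area=0x%s alpa=0x%s\n' \
    "${fcid:0:2}" "${fcid:2:2}" "${fcid:4:2}"
}

decode_fcid 010600   # the host address from the output above
# -> domain=0x01 area=0x06 alpa=0x00
```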
==fcp wwpn-alias (set|show) : alias names for more clarity when debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
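With many initiators it pays off to generate the alias commands. A sketch (the two-column "alias wwpn" mapping file format is made up for this example):

```shell
#!/bin/bash
# Sketch: turn a two-column "alias wwpn" mapping (hypothetical format)
# into `fcp wwpn-alias set` commands to paste on the filer.
gen_alias_cmds() {
  awk '!/^#/ && NF == 2 { printf "fcp wwpn-alias set %s %s\n", $1, $2 }'
}
```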
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, you can do it as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Prints only the WWNs; if several appear on one line, each is printed separately.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
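The same extraction also works without gawk's three-argument match(); <i>grep -oE</i> prints each match on its own line as well (a portable alternative, not from the source):

```shell
#!/bin/bash
# Portable alternative to the gawk one-liner above:
# print every colon-separated 8-byte WWN found in the input, one per line.
extract_wwns() {
  grep -oE '([0-9a-f]{2}:){7}[0-9a-f]{2}'
}

echo "port 20:00:00:a0:98:5d:33:82 node 20:04:00:a0:98:5d:33:82" | extract_wwns
# -> 20:00:00:a0:98:5d:33:82
# -> 20:04:00:a0:98:5d:33:82
```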
==Some additions to NetApp's sanlun lun show on Solaris==
<source lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,online[disk],paths[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</source>
b5f7fc0f2a9aa524286ff5255532a0f0fd92c8f7
CreepyLinks
0
354
1853
2018-03-27T13:18:57Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Security]] =Google= * [https://www.google.com/maps/timeline?pb Google Maps Timeline] * [https://myactivity.google.com/myactivity Activity] ==Settin…“
wikitext
text/x-wiki
[[Kategorie:Security]]
=Google=
* [https://www.google.com/maps/timeline?pb Google Maps Timeline]
* [https://myactivity.google.com/myactivity Activity]
==Settings==
* [https://google.com/settings/ads/ Control your ads]
* [https://myaccount.google.com/security Account security settings]
=YouTube=
* [https://www.youtube.com/feed/history Youtube History]
25b1f73aa45ceed47526acb289e8d5447fb81216
1854
1853
2018-03-27T13:20:35Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Security]]
=Google=
* [https://www.google.com/maps/timeline?pb Google Maps Timeline]
* [https://myactivity.google.com/myactivity Activity]
==Settings==
* [https://google.com/settings/ads/ Control your ads]
* [https://myaccount.google.com/security Account security settings]
==Get what google has about you==
* [https://takeout.google.com/settings/takeout?pli=1 Download huge amount of data about you]
=YouTube=
* [https://www.youtube.com/feed/history Youtube History]
88fdecea31ac4a4e6f3cda3aaf1dbb4619e3dd83
VirtualBox physical mapping
0
355
1856
2018-04-09T08:54:27Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Virtualbox] ==Create a virtual mapping to your physical Windows== In my example it is on partitions 1 and 2 of the disk.<br> This helps me to work…“
wikitext
text/x-wiki
[[Kategorie:Virtualbox]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems with installing Windows updates alongside grub. Some Windows updates fail if you have grub in your MBR.
===Create a dummy mbr===
<source lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</source>
===Create the mapping as a VMDK file===
<source lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</source>
a4190462a4647d756e9004de870637795eb08912
1857
1856
2018-04-09T08:55:16Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Virtualbox]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems with installing Windows updates alongside grub. Some Windows updates fail if you have grub in your MBR.
===Create a dummy mbr===
<source lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</source>
===Create the mapping as a VMDK file===
<source lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</source>
After that create a VM and use this special VMDK file.
c4d0d75232d276c460ae7ddcc1fd2fef76238db4
Windows
0
356
1858
2018-04-13T08:52:56Z
Lollypop
2
Die Seite wurde neu angelegt: „ ==Manage Stored User Names Passwords== <source lang=windows> %windir%\System32\rundll32.exe keymgr.dll,KRShowKeyMgr </source>“
wikitext
text/x-wiki
==Manage Stored User Names Passwords==
<source lang=windows>
%windir%\System32\rundll32.exe keymgr.dll,KRShowKeyMgr
</source>
3ae2b2e616595d478421051b34d8b32058475e7d
Sophora
0
357
1861
2018-05-02T10:35:50Z
Lollypop
2
Die Seite wurde neu angelegt: „ Database: grep db ~sophora/intranet/sophora/repository/repository.xml“
wikitext
text/x-wiki
Database:
grep db ~sophora/intranet/sophora/repository/repository.xml
5a42230751338a02aa2707f4929908c7b766b383
Systemd
0
233
1862
1812
2018-05-03T09:07:44Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, as daemon names usually are, it has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It has many new features: it extends the classic init system with the ability to watch processes after startup has completed, lists the sockets owned by processes it started, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process starts with exactly the capabilities it needs: even if it is started as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. It prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP is a list of servers to try if none of the NTP list can be reached.
If you want to split the settings into multiple files, or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the config, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'', ''zed.service'' is wanted (started if enabled), and ''zfs-mount.service'' and ''zfs-share.service'' are required.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleared. This is implemented via a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=<unit1> <unit2> <unit3>'' (a space-separated list of units).
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to be the mount destination path with slashes turned into dashes; literal dashes in the path must be escaped as \x2d. To get the resulting name you can simply use systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to escape the backslash (\\) in front of the x2d (which encodes a dash, -) when typing the name on the shell.
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Put these contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not spawn a shell, so shell redirections and a trailing & do not work here
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
28365bf8c182aae712b4357a3fc14363ea77b9c5
1863
1862
2018-05-03T09:09:26Z
Lollypop
2
/* Logging with syslog-ng and systemd in a chroot environment */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes: like daemon names in general, this one has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features: it extends the classic init with the ability to watch processes after startup has completed, lists the sockets owned by processes it started, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As you can see below, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=bash>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process starts with exactly the capabilities it needs: even if it is started as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
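As a sketch, a bounding set like this could be applied to one of your own services via a drop-in; the unit name ''mydaemon.service'' and the file name are made up for illustration:
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/capabilities.conf
# (hypothetical unit; create the drop-in with: systemctl edit mydaemon.service)
[Service]
# allow only what this (assumed) network daemon actually needs
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_RAW
</source>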
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. It prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
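A minimal drop-in sketch (the unit name is hypothetical):
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/nonewprivs.conf (hypothetical unit name)
[Service]
# neither this process nor any of its children can ever gain privileges again,
# e.g. through set-UID binaries such as su or sudo
NoNewPrivileges=true
</source>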
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP is a list of servers to try if none of the NTP list can be reached.
If you want to split the settings into multiple files, or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
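For example, a locally generated server list could look like this (the file name and the second PTB server are assumptions):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local-ntp.conf
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
</source>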
After you have set up the config, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=bash>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'', ''zed.service'' is wanted (started if enabled), and ''zfs-mount.service'' and ''zfs-share.service'' are required.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleared. This is implemented via a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=<unit1> <unit2> <unit3>'' (a space-separated list of units).
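A sketch with two hypothetical units sharing one private /tmp:
<source lang=ini>
# a.service (hypothetical)
[Service]
PrivateTmp=true

# b.service (hypothetical) - joins the namespace of a.service and
# therefore sees the same private /tmp and /var/tmp
[Service]
JoinsNamespaceOf=a.service
PrivateTmp=true
</source>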
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to be the mount destination path with slashes turned into dashes; literal dashes in the path must be escaped as \x2d. To get the resulting name you can simply use systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
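If systemd-escape is not at hand, the rule can be reproduced in plain shell; this sketch only handles dashes and slashes, not other characters that would also need escaping:
<source lang=bash>
path="/var/chroot/run/systemd/journal/dev-log"
# escape literal dashes first, then turn the remaining slashes into dashes
unit="$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g').mount"
echo "$unit"    # var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>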
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to escape the backslash (\\) in front of the x2d (which encodes a dash, -) when typing the name on the shell.
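On the shell, single quotes work just as well as the doubled backslash; both forms pass the same literal unit name on:
<source lang=bash>
a="var-chroot-run-systemd-journal-dev\\x2dlog.mount"   # doubled backslash inside double quotes
b='var-chroot-run-systemd-journal-dev\x2dlog.mount'    # single quotes, no extra escaping needed
[ "$a" = "$b" ] && echo "identical"
</source>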
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Put these contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not spawn a shell, so shell redirections and a trailing & do not work here
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
d6d6f9051ee4f8e27bd3c1f349c4c9f1c18a09ce
1864
1863
2018-05-03T09:10:21Z
Lollypop
2
/* Use capabilities to drop user privileges (CapabilityBoundingSet) */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes: like daemon names in general, this one has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features: it extends the classic init with the ability to watch processes after startup has completed, lists the sockets owned by processes it started, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As you can see below, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, someone deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it already has. This prohibits privilege escalation: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
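This could be enabled for an existing service with a drop-in file, for example (a sketch; ''example.service'' and the drop-in name are placeholders):
<source lang=ini>
# /etc/systemd/system/example.service.d/hardening.conf (hypothetical drop-in)
[Service]
NoNewPrivileges=true
</source>
After creating the drop-in, run ''systemctl daemon-reload'' and restart the service for the setting to take effect.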
=systemd-timesyncd, an alternative to ntp=
ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP='' is a space-separated list of NTP servers.
''FallbackNTP='' is a list of servers that are used if none of the ''NTP='' servers can be reached.
If you want to split the configuration into multiple files or generate them at boot, you can put files with the ending <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
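Such a drop-in could look like this (the file name <i>50-local.conf</i> is only an example; any name ending in <i>.conf</i> works):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local.conf (example drop-in)
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
</source>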
After you have set up the config, you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If it did not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach the ''zfs.target'' we want ''zed.service'' to be started (''Wants='', a soft dependency) and we need ''zfs-mount.service'' and ''zfs-share.service'' (''Requires='', a hard dependency).
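To illustrate the difference between ''Requires='' and ''Wants='', here is a minimal sketch (''a.service'', ''b.service'' and ''c.service'' are placeholder names):
<source lang=ini>
# a.service (hypothetical unit)
[Unit]
Description=Example service A
# Hard dependency: if b.service fails to activate, a.service is not started either.
Requires=b.service
# Soft dependency: c.service is pulled in, but a failure of it is ignored.
Wants=c.service
</source>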
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleaned up. This is implemented with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1> [<unit2> …]'' (a space-separated list).
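A sketch with two hypothetical units sharing one private /tmp (unit names are placeholders):
<source lang=ini>
# a.service owns the private /tmp
[Service]
PrivateTmp=true
</source>
<source lang=ini>
# b.service joins the namespace of a.service and sees the same /tmp;
# PrivateTmp must be enabled in both units for this to work.
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</source>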
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root holds all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the mount destination path, with dashes escaped. To get the resulting name, you can simply use ''systemd-escape'':
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember that the backslash in front of x2d (which encodes a dash, -) has to be escaped with a second backslash (\\) when you type the name in the shell.
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check that the bind mount is in place:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Read the log messages from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS destinations (authoritative server and recursor)
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS filters (authoritative server and recursor)
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS final file logs (authoritative server and recursor)
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
# dbstart exits after bringing the instance up, so use oneshot with
# RemainAfterExit=yes to keep the unit marked active.
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd does not invoke a shell, so shell redirections (2>&1) and '&'
# must not appear in Exec lines.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
dc44411b4dfeb0610206abb69c45755a9338cadb
1866
1865
2018-05-03T09:15:16Z
Lollypop
2
/* Configure syslog-ng */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like most daemon names it has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty SysV init system of Linux.
It has many new features: it extends the normal init system with the ability to supervise processes after startup, lists the sockets owned by processes started by systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see below, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, someone deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs to have. Even if it starts as root, all unnecessary capabilities are dropped when starting the process.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits privilege escalation: no set-UID binary will help an attacker to get more privileges than the user of the exploited service.
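A minimal sketch of how this could look in a unit (the service name and path are hypothetical):
<source lang=ini>
# myservice.service -- hypothetical example
[Service]
ExecStart=/usr/local/bin/myservice
# From here on execve() can no longer grant additional privileges,
# so set-UID binaries like su or sudo are neutralized for this tree.
NoNewPrivileges=true
</source>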
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP is a list of servers that is used if none of the servers in the NTP list can be reached.
If you want to split the settings into multiple files or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
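For example, such a drop-in could look like this (file name and servers chosen only for illustration):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local.conf
[Time]
NTP=ntp1.example.org ntp2.example.org
</source>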
After you have set up the config you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'' we want ''zed.service'' to be started if it is enabled, and we strictly need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
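These settings restrict the file system view of the unit's processes. A minimal sketch (the paths are just examples; newer systemd versions call these options ReadWritePaths=, ReadOnlyPaths= and InaccessiblePaths=):
<source lang=ini>
[Service]
# The unit sees /etc read-only, cannot see /home at all,
# and may write only below its own state directory.
ReadOnlyDirectories=/etc
InaccessibleDirectories=/home
ReadWriteDirectories=/var/lib/myservice
</source>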
====Private Tmp-Directories====
This mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down the directories are cleared. This is done with a separate mount namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=<unit1> <unit2> …'' (a space-separated list).
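A sketch with two hypothetical services sharing one private /tmp:
<source lang=ini>
# a.service -- owns the private namespace
[Service]
PrivateTmp=true

# b.service -- joins the namespace of a.service
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</source>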
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the mount destination path, with dashes escaped. To get the resulting name you can simply use systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double-escape (\\) the x2d (which is an escaped dash, -) on the shell command line, so that a single backslash ends up in the file name.
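To illustrate (this only demonstrates shell quoting, nothing systemd-specific): the shell consumes one level of backslashes, so typing \\x2d yields a literal \x2d in the argument.
<source lang=bash>
# The shell eats one backslash; the argument ends up containing '\x2d':
printf '%s\n' var-chroot-run-systemd-journal-dev\\x2dlog.mount
# prints: var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>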
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up. Put the following content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
# dbstart exits after launching the instances, so use oneshot and
# keep the unit marked active afterwards.
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd does not run commands through a shell, so no redirections
# or trailing '&'; stdout/stderr go to the journal anyway.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
[[Kategorie:Linux]]
=systemd=
Yes, like daemons are usually written this has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the normal init system with the ability to watch processes after the start has done, list sockets owned by processes started with systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs to have. Even if it starts as root all unnessesary capabilities are dropped for starting the process.
I dont want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here but you can take a look to understand what this capabilities are.
'''BUT''' beware of programs which just test on UID 0!
==Nailing a process to it's rights : NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the processtree from this level on will stuck at the UID and the privileges it has. This prohibits UID changes. No set UID binary will help the hacker to get more privileges than the user of the exploited service.
=systemd-timesyncd an alternative to ntp=
The ntpd is a good and fat old horse for servers but clients do not necessarily need this one. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The server list is a space separated list of NTP servers.
FallbackNTP is a list of servers if none of the NTP list could be reached.
If you want to split them into multiple files or generate them at start you can put files with the ending <i>.conf</i> in <i>/etc/systemd/timesyncd.conf.d/</i>.
After you setup the config you can enable the timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice it worked <i>NTP synchronized: yes</i>.
If not take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a seperate namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the mount destination path, with dashes escaped. You can generate the resulting name with systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
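For a simple path like this one, the escaping can be reproduced with plain shell tools (just a sketch; the real systemd-escape also handles other special characters):
<source lang=bash>
path=/var/chroot/run/systemd/journal/dev-log
# Escape literal dashes first, then turn the path separators into dashes:
unit=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g').mount
printf '%s\n' "$unit"   # -> var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>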
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\x2dlog.mount for the mount===
Remember to escape the backslash (\\x2d) on the shell; the file name itself contains a single backslash in front of x2d (the escaped dash -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want it mounted before syslog-ng and pdns-recursor come up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES 12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd does not run ExecStart through a shell: no redirections, no trailing &
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
d8baea41782fb5aafb9266347fbd90aa4d12dde6
1868
1867
2018-05-09T06:28:04Z
Lollypop
2
/* systemd-timesyncd an alternative to ntp */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names in general, it is written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty SysV init system of Linux.
It has many new features: it keeps watching processes after they have started, lists the sockets owned by services it started, adds security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As the listing shows, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory has been deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
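If you want to change such a vendor unit, don't edit the file under /lib; a drop-in override (created e.g. with ''systemctl edit zfs.target'') is merged on top of it. A sketch, with a made-up local unit name:
<source lang=ini>
# /etc/systemd/system/zfs.target.d/override.conf
[Unit]
# Additionally pull in a hypothetical local service:
Wants=local-zfs-report.service
</source>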
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and privileges it currently has. UID changes are prohibited, and no setuid binary will help an attacker gain more privileges than the user of the exploited service.
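These options combine well. A minimal hardening sketch for a [Service] section (the daemon path is hypothetical; pick only the capabilities your service really needs):
<source lang=ini>
[Service]
# Hypothetical daemon that only needs to bind a privileged port:
ExecStart=/usr/local/bin/mydaemon
NoNewPrivileges=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
PrivateTmp=true
ProtectSystem=full
ProtectHome=yes
</source>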
=systemd-resolved, the name resolution service=
==Cache statistics==
<source lang=bash>
# systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd an alternative to ntp=
The ntpd is a good but fat old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP='' is a space-separated list of NTP servers.
''FallbackNTP='' lists servers to use if none of the ''NTP='' servers can be reached.
If you want to split the settings into multiple files, or generate them at boot, put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
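Such a drop-in could look like this (file name and server list chosen for illustration):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/10-local.conf
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
</source>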
After you have set up the config, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should stop and disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'', ''zed.service'' is wanted (it is started, but a failure does not fail the target), while ''zfs-mount.service'' and ''zfs-share.service'' are required.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a seperate namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount-unit file has to be the mount destination path. Dashes must be escaped. To get the resulting name you can easily use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double escape (\\) the x2d (which is a dash -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME >> 2>&1 &
ExecStop=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME 2>&1 &
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
8fba54f0aa71de60097e765bb4631779ac4dd0e8
1870
1869
2018-05-09T06:29:18Z
Lollypop
2
/* Flush the cache */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like most daemon names, it is written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system with the ability to supervise processes after they have been started and to list sockets owned by processes started through systemd, and it adds security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As the following listing shows, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on stays stuck with the UID and the privileges it has. This prohibits UID changes: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
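A minimal sketch of how this could look in a unit file (the unit name, binary and user here are hypothetical):
<source lang=ini>
# /etc/systemd/system/example.service (hypothetical unit)
[Unit]
Description=Example service nailed to its privileges

[Service]
User=example
ExecStart=/usr/local/bin/example-daemon
# Neither the service nor any of its children can gain new
# privileges, e.g. through set-UID binaries
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
</source>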
=systemd-resolved the name resolve service=
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done via <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The server list is a space-separated list of NTP servers.
FallbackNTP is a list of servers used if none of the NTP list can be reached.
If you want to split the configuration into multiple files or generate it at startup, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
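For example, such a drop-in could look like this (the file name and server names are made up for illustration):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/10-local.conf (hypothetical)
[Time]
NTP=ntp1.example.org ntp2.example.org
</source>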
After you have set up the config, you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'', ''zed.service'' should be started if enabled (''Wants''), and ''zfs-mount.service'' and ''zfs-share.service'' are strictly required (''Requires'').
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleared. This is implemented via a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1> [<unit2> …]'' (a space-separated list in the ''[Unit]'' section).
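A sketch with two hypothetical services sharing one private /tmp, assuming ''JoinsNamespaceOf='' is set in the ''[Unit]'' section of the joining unit:
<source lang=ini>
# a.service (hypothetical)
[Unit]
Description=Service A with its own private /tmp

[Service]
ExecStart=/usr/local/bin/a-daemon
PrivateTmp=true
</source>
<source lang=ini>
# b.service (hypothetical) - shares the private /tmp of a.service
[Unit]
Description=Service B joining the namespace of A
JoinsNamespaceOf=a.service

[Service]
ExecStart=/usr/local/bin/b-daemon
PrivateTmp=true
</source>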
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we'd better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to be derived from the mount destination path: slashes become dashes, and literal dashes must be escaped as \x2d. You can easily generate the resulting name with systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\x2dlog.mount for the mount===
Remember to escape the backslash for the shell (\\x2d) so that the file name contains the literal \x2d (an escaped dash, -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Read the log from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
Save this as <i>/usr/lib/systemd/system/dbora@.service</i> (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not run commands through a shell, so
# redirections and '&' are not valid here.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
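Such a check typically looks like this (a hypothetical snippet, not taken from any particular program):
<source lang=bash>
# Hypothetical anti-pattern: the script tests only the numeric UID and
# ignores which capabilities the process actually holds.
is_root_uid() {
    [ "$1" -eq 0 ]
}

if ! is_root_uid "$(id -u)"; then
    echo "must be run as root" >&2
fi
</source>
A process running with, say, CAP_NET_RAW but UID 1000 is rejected by such a test even though it could do the job.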
==Nailing a process to its privileges: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it already has. This prohibits privilege escalation: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
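A minimal sketch of a hardened unit combining both options (the unit name and binary path are made up for illustration):
<source lang=ini>
# hypothetical mydaemon.service; unit name and path are examples
[Unit]
Description=Example hardened daemon
[Service]
ExecStart=/usr/local/bin/mydaemon
NoNewPrivileges=true
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
</source>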
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntpd=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' is a list of servers used if none of the servers in the ''NTP'' list can be reached.
If you want to split the configuration into multiple files, or generate parts of it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
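For example, a drop-in fragment overriding just the server list might look like this (the file name and the second server are only examples):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-servers.conf (example name)
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
</source>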
After you have set up the configuration you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'' we want ''zed.service'' to be started if it is enabled (''Wants='' is a soft dependency), and we need ''zfs-mount.service'' and ''zfs-share.service'' (''Requires='' is a hard dependency).
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down the directories are cleared. This is done via a separate mount namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf='' with one or more unit names (space separated).
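A sketch of two hypothetical units sharing one private /tmp (the unit names a.service and b.service are made up):
<source lang=ini>
# a.service (hypothetical)
[Service]
PrivateTmp=true
# b.service (hypothetical): joins the namespace of a.service;
# note that JoinsNamespaceOf= is a [Unit] option
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</source>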
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to be derived from the mount destination path: slashes become dashes, and literal dashes in the path must be escaped as \x2d. To get the resulting name you can simply use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\x2dlog.mount for the mount===
In the shell the backslash itself has to be escaped (\\x2d); the \x2d encodes the dash (-).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd executes these directly, without a shell: no redirections or trailing &
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
cb5ed39f732f9e8b51798fb4e80d8d143d25ede3
Ubuntu apt
0
120
1872
1816
2018-05-15T13:32:19Z
Lollypop
2
/* Configuring a proxy for apt */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Get all non-LTS packages ==
<source lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</source>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy:
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
==Use this proxy config in the shell==
<source lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</source>
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the nameservice.
=== Pin the normal release ===
<source lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</source>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Update the package lists ===
<source lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</source>
=== Check with "apt-cache policy" which version is preferred now ===
<source lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</source>
=== Upgrade to the packages from the new release ===
<source lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</source>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<source lang=bash>
# apt -t zesty install libstdc++6
...
</source>
eb3f2bdd30639bf5ed479177ddd67620b8e15ac2
SSH FingerprintLogging
0
358
1875
2018-05-17T11:35:14Z
Lollypop
2
Created the page: „[[Kategorie:SSH]] [[Kategorie:Bash]] =Why logging fingerprints?= It is just for the possibility of setting the [[Bash]] HISTFILE per logged in user. ==The Auth…“
wikitext
text/x-wiki
[[Kategorie:SSH]]
[[Kategorie:Bash]]
=Why log fingerprints?=
It simply makes it possible to set the [[Bash]] HISTFILE per logged-in user.
==The AuthorizedKeysCommand==
* /opt/sbin/fingerprintlog:
<source lang=bash>
#!/bin/bash
# /opt/sbin/fingerprintlog <logfile> %u %k %t %f
# Arguments to AuthorizedKeysCommand may be provided using the following tokens, which will be expanded at runtime:
# %% is replaced by a literal '%',
# %u is replaced by the username being authenticated,
# %h is replaced by the home directory of the user being authenticated,
# %t is replaced with the key type offered for authentication,
# %f is replaced with the fingerprint of the key, and
# %k is replaced with the key being offered for authentication.
# If no arguments are specified then the username of the target user will be supplied.
[ "_${LOGNAME}_" != "_daemon_" ] && exit 1
LOGFILE=$1
USER=$2
KEY=$3
KEYTYPE=$4
FINGERPRINT=$5
printf "%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n" "$(/bin/date -Iseconds)" "${KEYTYPE}" "${USER}" "${PPID}" "${FINGERPRINT}" "${KEY}" >> ${LOGFILE}
</source>
<source lang=bash>
# chmod 0750 /opt/sbin/fingerprintlog
# chown root:daemon /opt/sbin/fingerprintlog
</source>
==Create the logfile==
* /var/log/fingerprint.log
<source lang=bash>
# touch /var/log/fingerprint.log
# chown daemon:ssh-user /var/log/fingerprint.log
# chmod 0640 /var/log/fingerprint.log
</source>
==Setup logrotation==
* /etc/logrotate.d/fingerprintlog
<source lang=bash>
/var/log/fingerprint.log
{
su daemon syslog
create 0640 daemon ssh-user
rotate 8
weekly
missingok
notifempty
}
</source>
==Add fingerprint logging to sshd==
* /etc/ssh/sshd_config
<source lang=bash>
...
DenyUsers daemon
AuthorizedKeysCommand /opt/sbin/fingerprintlog /var/log/fingerprint.log %u %k %t %f
AuthorizedKeysCommandUser daemon
...
</source>
Restart sshd
<source lang=bash>
# systemctl restart ssh.service
</source>
==Add magic to your .bashrc==
<source lang=bash>
# apt install gawk
</source>
* ~/.bashrc
<source lang=bash>
...
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(/usr/bin/gawk -v ppid="${PPID}" -v user=${LOGNAME} 'BEGIN{split(ssh_connection,connection);}$5 ~ "PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6;exit;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</source>
5c59fdcf78c5b9721d39f10732f2a86c5181c831
Bash cheatsheet
0
37
1877
1804
2018-05-17T11:41:22Z
Lollypop
2
/* bash history per user */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example: services in normal and reverse order, for starting and stopping.
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work the same way, e.g. always stepping by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the no-op builtin <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ ${#} -ge 1 ]
then
format=$1; shift;
printf "%s : ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" ${*}
else
while read input
do
printf "%s : %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 : test
20161103 10:47:25 :
20161103 10:47:25 : toast
$ printlog test
20161103 10:47:30 : test
$ printlog "test %s %d %s\n" "bla" 0 "bli"
20170721 09:45:06 : test bla 0 bli
</source>
=Calculations=
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
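The <i>$[ ... ]</i> form is deprecated Bash syntax; the POSIX arithmetic expansion <i>$(( ... ))</i> does the same (<i>**</i> for powers is a Bash extension):
<source lang=bash>
echo $(( 3 + 4 ))    # 7
echo $(( 2 ** 8 ))   # 2^8 = 256
</source>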
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and prepend a timestamp to each line; the timestamp must be
# computed per line (a $(date) inside a sed substitution would expand only
# once, at startup)
gawk '{ print strftime("%d.%m.%Y %H:%M:%S") " :: " $0; fflush(); }' < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find a temp filename
FIFO=$(mktemp)
# Clean up on exit
trap 'rm -f "${FIFO}"' 0
# Delete the regular file created by mktemp ...
rm "${FIFO}"
# ... and create a FIFO with the same name instead
mkfifo "${FIFO}"
# Read from the FIFO, prepend the current date to every line and
# write the result to the logfile (date runs once per line here)
while IFS= read -r line
do
    printf '%s :: %s\n' "$(date '+%d.%m.%Y %H:%M:%S')" "${line}"
done < "${FIFO}" > "${LOGFILE}" &
# Redirect stdout & stderr to the FIFO
exec > "${FIFO}" 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
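Assuming a reasonably recent bash, process substitution achieves the same per-line timestamping without managing a FIFO by hand; a minimal sketch:

```shell
#!/bin/bash
# Variant without an explicit FIFO: send stdout and stderr into a
# process substitution that prefixes every line with the current date.
exec > >(while IFS= read -r line
    do
        printf '%s :: %s\n' "$(date '+%d.%m.%Y %H:%M:%S')" "${line}"
    done) 2>&1
echo bla
echo bli >&2
```

Note that output becomes asynchronous: the timestamping process may still be flushing lines when the script itself has already exited.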
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
    case $1 in
        -h|--help)
            usage help
            shift;
            exit 0;
            ;;
        --?*=?*|-?*=?*)
            param=${1%%=*}
            value=${1#*=}
            shift;
            ;;
        --?*=|-?*=)
            param=${1%=*}
            usage "${param} needs a value!"
            ;;
        *)
            if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
            param=$1
            value=$2
            shift; shift;
            ;;
    esac
    case $param in
        *)
            other_params+=( "${param}=${value}" )
            param=${param#--}
            param=${param//-/_}
            export ${param^^}=${value}
            ;;
    esac
done
</source>
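The --key=value branch boils down to a few parameter expansions. A standalone demonstration (the option name --log-level is made up):

```shell
# How a --key=value argument is split and normalised into an
# exported variable (requires bash 4 for the ${var^^} upcasing):
arg='--log-level=debug'   # hypothetical option
param=${arg%%=*}          # --log-level  (everything before the first =)
value=${arg#*=}           # debug        (everything after the first =)
param=${param#--}         # log-level    (strip leading dashes)
param=${param//-/_}       # log_level    (dashes -> underscores)
printf '%s=%s\n' "${param^^}" "${value}"   # LOG_LEVEL=debug
```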
5e6395063a90eadec1f1e827529fba21a29b9355
SSH FingerprintLogging
0
358
1879
1878
2018-05-17T11:42:21Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH|Fingerprint]]
[[Kategorie:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It simply makes it possible to set the [[Bash]] HISTFILE per logged-in user.
==The AuthorizedKeysCommand==
* /opt/sbin/fingerprintlog:
<source lang=bash>
#!/bin/bash
# /opt/sbin/fingerprintlog <logfile> %u %k %t %f
# Arguments to AuthorizedKeysCommand may be provided using the following tokens, which will be expanded at runtime:
# %% is replaced by a literal '%',
# %u is replaced by the username being authenticated,
# %h is replaced by the home directory of the user being authenticated,
# %t is replaced with the key type offered for authentication,
# %f is replaced with the fingerprint of the key, and
# %k is replaced with the key being offered for authentication.
# If no arguments are specified then the username of the target user will be supplied.
[ "_${LOGNAME}_" != "_daemon_" ] && exit 1
LOGFILE=$1
USER=$2
KEY=$3
KEYTYPE=$4
FINGERPRINT=$5
printf "%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n" "$(/bin/date -Iseconds)" "${KEYTYPE}" "${USER}" "${PPID}" "${FINGERPRINT}" "${KEY}" >> "${LOGFILE}"
</source>
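With hypothetical values filled in for the sshd tokens, the printf above produces log lines of this shape (all key data here is a made-up placeholder):

```shell
# Simulate one log entry of /opt/sbin/fingerprintlog with dummy values
printf '%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n' \
    "$(date -Iseconds)" "ssh-ed25519" "lolly" "4711" \
    "SHA256:hypothetical" "AAAAC3hypothetical"
# e.g.: 2018-05-17T13:42:21+02:00 ssh-login T=ssh-ed25519 U=lolly PPID=4711 FP=SHA256:hypothetical K=AAAAC3hypothetical
```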
<source lang=bash>
# chmod 0750 /opt/sbin/fingerprintlog
# chown root:daemon /opt/sbin/fingerprintlog
</source>
==Create the logfile==
* /var/log/fingerprint.log
<source lang=bash>
# touch /var/log/fingerprint.log
# chown daemon:ssh-user /var/log/fingerprint.log
# chmod 0640 /var/log/fingerprint.log
</source>
==Set up log rotation==
* /etc/logrotate.d/fingerprintlog
<source lang=bash>
/var/log/fingerprint.log
{
su daemon syslog
create 0640 daemon ssh-user
rotate 8
weekly
missingok
notifempty
}
</source>
==Add fingerprint logging to sshd==
* /etc/ssh/sshd_config
<source lang=bash>
...
DenyUsers daemon
AuthorizedKeysCommand /opt/sbin/fingerprintlog /var/log/fingerprint.log %u %k %t %f
AuthorizedKeysCommandUser daemon
...
</source>
Restart sshd
<source lang=bash>
# systemctl restart ssh.service
</source>
==Add magic to your .bashrc==
<source lang=bash>
# apt install gawk
</source>
* ~/.bashrc
<source lang=bash>
...
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(/usr/bin/gawk -v ppid="${PPID}" '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6;exit;}' /var/log/fingerprint.log)
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</source>
3c202d130c91f4251537b81c07decf54703ead99
1886
1879
2018-05-17T12:45:14Z
Lollypop
2
/* Add magic to your .bashrc */
wikitext
text/x-wiki
[[Kategorie:SSH|Fingerprint]]
[[Kategorie:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It simply makes it possible to set the [[Bash]] HISTFILE per logged-in user.
==The AuthorizedKeysCommand==
* /opt/sbin/fingerprintlog:
<source lang=bash>
#!/bin/bash
# /opt/sbin/fingerprintlog <logfile> %u %k %t %f
# Arguments to AuthorizedKeysCommand may be provided using the following tokens, which will be expanded at runtime:
# %% is replaced by a literal '%',
# %u is replaced by the username being authenticated,
# %h is replaced by the home directory of the user being authenticated,
# %t is replaced with the key type offered for authentication,
# %f is replaced with the fingerprint of the key, and
# %k is replaced with the key being offered for authentication.
# If no arguments are specified then the username of the target user will be supplied.
[ "_${LOGNAME}_" != "_daemon_" ] && exit 1
LOGFILE=$1
USER=$2
KEY=$3
KEYTYPE=$4
FINGERPRINT=$5
printf "%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n" "$(/bin/date -Iseconds)" "${KEYTYPE}" "${USER}" "${PPID}" "${FINGERPRINT}" "${KEY}" >> "${LOGFILE}"
</source>
<source lang=bash>
# chmod 0750 /opt/sbin/fingerprintlog
# chown root:daemon /opt/sbin/fingerprintlog
</source>
==Create the logfile==
* /var/log/fingerprint.log
<source lang=bash>
# touch /var/log/fingerprint.log
# chown daemon:ssh-user /var/log/fingerprint.log
# chmod 0640 /var/log/fingerprint.log
</source>
==Set up log rotation==
* /etc/logrotate.d/fingerprintlog
<source lang=bash>
/var/log/fingerprint.log
{
su daemon syslog
create 0640 daemon ssh-user
rotate 8
weekly
missingok
notifempty
}
</source>
==Add fingerprint logging to sshd==
* /etc/ssh/sshd_config
<source lang=bash>
...
DenyUsers daemon
AuthorizedKeysCommand /opt/sbin/fingerprintlog /var/log/fingerprint.log %u %k %t %f
AuthorizedKeysCommandUser daemon
...
</source>
Restart sshd
<source lang=bash>
# systemctl restart ssh.service
</source>
==Add magic to your .bashrc==
<source lang=bash>
# apt install gawk
</source>
* ~/.bashrc
<source lang=bash>
...
# Match parent PID or grand parent PID against fingerprint.log
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(/usr/bin/gawk -v ppid="(${PPID}|$(awk '{print $4;}' /proc/${PPID}/stat))" '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6;exit;}' /var/log/fingerprint.log)
# Set the history file
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</source>
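The nested default expansion used for HISTFILE resolves in order: fingerprint, then SUDO_USER, then the literal "default". Its behaviour in isolation:

```shell
# Fallback chain used for HISTFILE above
FINGERPRINT=""
SUDO_USER=""
echo "hist_${FINGERPRINT:-${SUDO_USER:-default}}"  # hist_default
SUDO_USER=alice
echo "hist_${FINGERPRINT:-${SUDO_USER:-default}}"  # hist_alice
FINGERPRINT=SHA256_abc
echo "hist_${FINGERPRINT:-${SUDO_USER:-default}}"  # hist_SHA256_abc
```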
80519a12828d5f9b266a5947b521109c6771cd30
SSH Tipps und Tricks
0
75
1880
1844
2018-05-17T11:43:04Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:SSH|Tipps]]
[[Kategorie:Putty|Tipps]]
=SSH, the way to the target=
==SSH across one or more hops==
To open an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you always log in to one hop and then log in to the next, it is often hard to drag the port forwardings, or the SOCKS5 proxy, along. It is easier to define ProxyCommands for the whole path from Host_A to Host_B.
Host_B can only be reached from GW_2, so we add an entry for it to ~/.ssh/config:
<pre>
Host Host_B
ProxyCommand ssh GW_2 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
GW_2 in turn can only be reached via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyCommand ssh GW_1 "/bin/bash -c 'exec 3<>/dev/tcp/%h/%p; cat <&3 & cat >&3;kill $!'"
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunnelled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, are now as simple as this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are set up in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a Bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
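On newer OpenSSH (7.3 and later) the same multi-hop chain can be declared without the /dev/tcp trick, using ProxyJump; a minimal sketch with the host names from above:

```text
Host Host_B
    ProxyJump GW_1,GW_2
```

On the command line the equivalent is <i>ssh -J GW_1,GW_2 Host_B</i>.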
==Breaking out of paradise==
Problem: the environment you are in is so unfortunately walled in with firewalls that you cannot work. But you have to get out via SSH to quickly look something up, or fetch something, elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then add to your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, <i>ssh ssh-via-proxy</i> takes you to the SSH target you wanted to reach. Of course you can in turn use that server in another ProxyCommand, and so on.
==Oh right... the internal wiki...==
That is no problem either if it is only reachable from the internal network; we simply send our requests through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic (SOCKS5) forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to deal with shorter strings. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
See also, regarding PortableApps:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
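The awk one-liner converts an RFC 4716 (SSH2) public key into the one-line OpenSSH format. A self-contained round trip using plain awk and a synthetic key file (the key material is a made-up placeholder):

```shell
# Build a minimal RFC 4716 public key file with dummy key material
cat > pubkey.ppk << 'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "test@example"
AAAAB3NzaC1yc2EAAAADAQABAAAAgQC0
FAKEBASE64PAYLOAD
---- END SSH2 PUBLIC KEY ----
EOF
# Glue the base64 lines together and append the comment
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
# → ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC0FAKEBASE64PAYLOAD test@example
```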
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</source>
==/etc/fstab==
<source lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>.
Then create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
= Two factor authentication =
== Google Authenticator ==
Since Google Authenticator is available for several smartphone operating systems, I chose it for OTP authentication.
All steps have to be performed on the destination host.
=== Install libpam-google-authenticator ===
<source lang=bash>
$ sudo apt-get install libpam-google-authenticator
</source>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<source lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</source>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator succeeds (the code was correct), authentication is complete.
* new_authtok_reqd=done : A required new authentication token is also treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator fails, no other authentication will be tried.
* nullok : Users who have not yet set up an OTP secret are still allowed to authenticate.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<source lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</source>
Without the setting in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd will still ask for a password, because /etc/pam.d/sshd enables password authentication.
f3bc91874cba189028abf406a0d061c3dea5f289
MySQL Tipps und Tricks
0
197
1883
1848
2018-05-17T11:53:47Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===Show MySQL traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while (<>) {
    chomp;
    next if /^[^ ]+[ ]*$/;
    if (/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
        if (defined $q) { print "$q\n"; }
        $q = $_;
    } else {
        $_ =~ s/^[ \t]+//;
        $q .= " $_";
    }
}'
</source>
===MySQL processes, refreshed every second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
    cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
...: Optional parameters to the mysql command
EOH
    exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
    case ${!param} in
        --grant-user|--gu)
            param=$[ ${param} + 1 ]
            grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
            # delete 2 parameters from list and set back $param
            set -- "${@:1:param-2}" "${@:param+1}"
            param=$[ ${param} - 2 ]
            ;;
        --grant-db|--gdb)
            param=$[ ${param} + 1 ]
            grant_db+=( "${!param}" )
            # delete 2 parameters from list and set back $param
            set -- "${@:1:param-2}" "${@:param+1}"
            param=$[ ${param} - 2 ]
            ;;
        --help)
            usage
            ;;
        *)
            ;;
    esac
done
# Fill in users which were given without a host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
    user="${grant_user[${param}]}"
    if [[ ${user} != ?*"@"?* ]]
    then
        before=${#grant_user[@]}
        if [[ ${user} == "@"?* ]]
        then
            host="${user/@}"
            if [[ "_${host}_" == "_%_" ]]
            then
                grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
            else
                grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
            fi
        else
            grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
        fi
        after=${#grant_user[@]}
        param=$[ param + after - before ]
        count=$[ count + after - before ]
    fi
done
# Get the users for the databases in the grant_db array
for db in ${grant_db[@]}
do
    grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# If no users were specified, show all grants
if [ ${#grant_user[@]} -eq 0 -a ${#grant_db[@]} -eq 0 ]
then
    printf -- '--\n-- %s\n--\n' "all grants";
    grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
    printf -- '--\n-- %s\n--\n' "${user}";
    mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
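The option loop above removes a parsed option and its value from the positional parameters with <code>set -- "${@:1:param-2}" "${@:param+1}"</code>. A minimal sketch of that idiom in isolation (the option names here are made up for illustration):
<source lang=bash>
#!/bin/bash
# Positional parameters: an option with a value, surrounded by other arguments
set -- --keep1 --gu 'root@localhost' --keep2
param=3                                  # ${!param} is the value belonging to --gu
# Drop parameters 2 and 3 (the option and its value), keep the rest
set -- "${@:1:param-2}" "${@:param+1}"
echo "$@"                                # --keep1 --keep2
</source>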
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
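For example, to keep the slow query log settings from this section across restarts, the matching my.cnf fragment (path and threshold are examples) would be:
<source lang=ini>
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time     = 2
</source>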
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries (called slow_query_log as of MySQL 5.6)
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files named by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: as soon as NONE appears among the log_output destinations, nothing is logged at all, so
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from the master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to inspect on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; journals metadata and writes data blocks before their metadata, the ext3/ext4 default mode)
* data=journal (worst performance but best data protection; journals metadata and all data)
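An fstab entry combining these options (device and mount point are examples, matching the mkfs call above) might look like:
<source lang=fstab>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>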
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
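The same sanity check works with plain shell arithmetic instead of bc (the byte count is taken from the fdisk output above):
<source lang=bash>
bytes=26843545600
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # 25 GiB
</source>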
Yes... really 25 GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget AppArmor! Like I did.. :-D
<source lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=text>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the change and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=fstab>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
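The values reported by SHOW VARIABLES are just the byte equivalents of the configured sizes (256K and 80M):
<source lang=bash>
echo $(( 256 * 1024 ))        # query_cache_limit:   262144
echo $(( 80 * 1024 * 1024 ))  # query_cache_size:  83886080
</source>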
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=apparmor>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
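As a quick cross-check of the dd output above: 1073741824 bytes in 1.7552 s is about 612 MB/s (dd counts MB as 10^6 bytes):
<source lang=bash>
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 1.7552 / 1000000 }'
</source>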
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead to an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
52c72d373d297b71381f28d09712e689b36ad2a1
1884
1883
2018-05-17T12:05:02Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===Mysql processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
...: Optional parameters to the mysql command
EOH
exit
}
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( $( echo ${!param} | awk -F'@' "NF==2 && \$1 {printf \"'%s'@'%s'\n\",\$1,\$2;next;}{print}") )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--help)
usage
;;
*)
;;
esac
done
# Fill users which are without host
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
user="${grant_user[${param}]}"
if [[ ${user} != ?*"@"?* ]]
then
before=${#grant_user[@]}
if [[ ${user} == "@"?* ]]
then
host="${user/@}"
if [[ "_${host}_" == "_%_" ]]
then
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}'" | sort ) "${grant_user[@]:param+1}" )
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}'" | sort ) "${grant_user[@]:param+1}" )
fi
else
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'" | sort ) "${grant_user[@]:param+1}" )
fi
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
fi
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# if no users specified, show all grants
if [ ${#grant_user[@]} -eq 0 -a ${#grant_db[@]} -eq 0 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
if you get
ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
For an idea of the binlog file to investigate on the master do this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata writes with the related data changes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, verify the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
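The same check works with plain shell integer arithmetic, no bc needed:

```shell
# Convert the raw device size from bytes to GiB without bc.
bytes=26843545600
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "${gib}G"   # 25G
```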
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the change and check the new limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
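The same awk filter can be tried on canned /proc/&lt;pid&gt;/limits content, so it works without a running mysqld:

```shell
# Keep the header line (NR==1) plus the 'Max open files' row.
limits='Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max open files            1024000              1024000              files'
printf '%s\n' "$limits" | awk 'NR==1 || /Max open files/'
```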
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
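The byte values MySQL reports are just the suffixed cnf settings expanded; a quick shell sanity check of that conversion (the helper name is made up):

```shell
# Expand K/M suffixes the way MySQL does for the cnf values above.
to_bytes() {
    n=${1%[KkMm]}
    case "$1" in
        *[Kk]) echo $(( n * 1024 )) ;;
        *[Mm]) echo $(( n * 1024 * 1024 )) ;;
        *)     echo "$n" ;;
    esac
}
to_bytes 256K   # 262144   -> query_cache_limit
to_bytes 2k     # 2048     -> query_cache_min_res_unit
to_bytes 80M    # 83886080 -> query_cache_size
```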
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
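The throughput figure dd prints can be reproduced from the byte count and elapsed time (awk, since the shell has no floating-point arithmetic):

```shell
# Recompute dd's MB/s figure: bytes / seconds / 10^6, rounded.
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 1.7552 / 1000000 }'
# -> 612 MB/s
```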
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
37a8ed9d1ee3d7ef0194308097c8769bb06c9f97
1905
1884
2018-08-28T16:54:12Z
Lollypop
2
/* All grants */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL traffic sent by a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===MySQL process list each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
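The xargs -i pattern from the one-liner in isolation, with echo standing in for the second mysql call:

```shell
# Each input line replaces {} in the command once per line (-i).
printf '%s\n' 'root@localhost' 'repl@%' \
  | xargs -i echo "show grants for {}"
# -> show grants for root@localhost
# -> show grants for repl@%
```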
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
mysql $* --silent --skip-column-names --execute "show create user ${user}; show grants for ${user}" | sed 's/$/;/'
done
</source>
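The trickiest part of the script is deleting a consumed option/value pair from $@ with array slicing; that idiom in isolation (bash, like the script itself):

```shell
#!/bin/bash
# Delete positional parameters $2 and $3 (an option and its value),
# the same way the grants script drops consumed --gu/--gdb pairs.
drop_pair_demo() {
    set -- keepA --gu 'somepattern' keepB
    param=2                      # index of the option
    param=$(( param + 1 ))       # consume its value
    set -- "${@:1:param-2}" "${@:param+1}"
    echo "$*"
}
drop_pair_demo   # -> keepA keepB
```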
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equivalent to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If this fails with
ERROR: Failed on connect: SSL connection error: protocol version mismatch
try disabling SSL:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file on the master is worth investigating, run this on the slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the ext3/ext4 default; journals metadata and groups metadata writes with the related data changes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, verify the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the number of open files for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the change and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
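The numbers above are simply the byte values of the suffixed settings (256K = 262144, 2k = 2048, 80M = 83886080). A small sketch to do the conversion by hand (the helper name to_bytes is made up):
<source lang=bash>
# Convert a MySQL-style size suffix (K/M/G) to bytes.
to_bytes () {
    local v=$1 n=${1%[KkMmGg]}
    case ${v: -1} in
        [Kk]) echo $(( n * 1024 )) ;;
        [Mm]) echo $(( n * 1024 * 1024 )) ;;
        [Gg]) echo $(( n * 1024 * 1024 * 1024 )) ;;
        *)    echo "${v}" ;;
    esac
}
to_bytes 256K   # 262144
to_bytes 2k     # 2048
to_bytes 80M    # 83886080
</source>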
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
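The 0.7*total_mem_size rule of thumb from the comment above can be calculated directly from /proc/meminfo (a sketch, assumes Linux):
<source lang=bash>
# Suggest innodb_buffer_pool_size as 70% of total RAM, rounded down to MB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "innodb_buffer_pool_size=$(( total_kb * 70 / 100 / 1024 ))M"
</source>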
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements that lead to an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
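The --mysql-password argument above pulls the password out of /root/.my.cnf; the extraction can be checked against a sample file (nawk is the Solaris name, plain awk behaves the same here; the sample path is made up):
<source lang=bash>
# Build a sample .my.cnf and extract the value after '=' on the password line.
cat > /tmp/sample.my.cnf <<'EOF'
[client]
user=root
password=s3cret
EOF
awk -F'=' '/password/{print $2}' /tmp/sample.my.cnf   # s3cret
</source>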
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
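The Perl filter can be tried without tcpdump by piping sample strings into it; note that the loop as written never flushes the last statement, so the sketch below adds an END block for that (the sample input is made up):
<source lang=bash>
printf 'SELECT *\n  FROM t1\nUPDATE t2 SET a=1\nnoise\n' | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
  if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
    if (defined $q) { print "$q\n"; }
    $q=$_;
  } else {
    $_ =~ s/^[ \t]+//; $q.=" $_";
  }
}
END { print "$q\n" if defined $q; }'
# Reassembles the wrapped lines into:
#   SELECT * FROM t1
#   UPDATE t2 SET a=1
</source>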
===Mysql processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2018
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
#echo ${grant_user[@]}
#echo ${grant_db[@]}
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
#echo ${grant_user[@]}
#echo ${grant_db[@]}
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL, the change lasts only until the server restarts.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
<source lang=text>
ERROR: Failed on connect: SSL connection error: protocol version mismatch
</source>
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
For an idea of which binlog file to investigate on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance, only metadata is logged)
* data=ordered (OK performance, metadata is recorded and grouped with the related data changes)
* data=journal (worst performance but best data protection, ext3 default mode, metadata and all data are recorded)
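Put together, an /etc/fstab entry using these options might look like this (device path follows the mkfs example above, mount point and data mode are a choice):
<source lang=text>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=ordered 0 2
</source>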
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand... do not forget apparmor! Like I did... :-D
<source lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntus /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# trated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL traffic sent by a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
  if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
    if (defined $q) { print "$q\n"; }
    $q=$_;
  } else {
    $_ =~ s/^[ \t]+//; $q.=" $_";
  }
}
END { print "$q\n" if defined $q; }  # flush the last statement on EOF
'
</source>
===MySQL process list every second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
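The xargs plumbing of this one-liner can be tried without a server; the two account strings below are made-up stand-ins for the rows the select would return (-I{} is the non-deprecated spelling of -i):
<source lang=bash>
# Hypothetical account list standing in for the select output;
# xargs turns each line into one "show grants for ..." statement.
printf '%s\n' '`root`@`localhost`' '`app`@`%`' \
 | xargs -n 1 -I{} echo 'show grants for {};'
</source>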
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</source>
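The script above gates show create user on server version >= 5.7 via substring_index arithmetic in SQL. The same gate can be sketched in plain shell; the version string is a made-up example:
<source lang=bash>
# Hypothetical version string as returned by "select version();"
version='5.7.30-log'
major=${version%%.*}        # everything before the first dot  -> 5
rest=${version#*.}
minor=${rest%%.*}           # between first and second dot     -> 7
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 7 ]; }
then
 echo 'SHOW CREATE USER supported'
fi
</source>
Unlike the pure (major>=5 and minor>=7) test in the script, this form also accepts 8.0.x, where the minor version is below 7.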
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
A setting changed with SET GLOBAL lasts only until the server restarts.
'''Don't forget to add it to your my.cnf to make it permanent!'''
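For example, to persist the slow-query-log settings used further down this page, a drop-in file in the !includedir directory works; the file name here is just an example:
<source lang=mysql>
# /etc/mysql/conf.d/logging.cnf (example file name)
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
log_output = FILE
</source>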
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files named by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None; if NONE appears anywhere in the log_output list, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get the error
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try disabling SSL:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To find out which binlog file to investigate on the master, run this on the slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
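The awk filter can be tried on a canned excerpt of show slave status output; the values below are invented:
<source lang=bash>
# Canned "show slave status\G" excerpt piped through the same filter;
# only the Master_Log_File line survives.
awk '$1=="Master_Log_File:"' <<'EOF'
             Slave_IO_State: Waiting for master to send event
            Master_Log_File: mysql-bin.000042
        Read_Master_Log_Pos: 107
EOF
</source>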
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; journals metadata and groups it with the related data writes)
* data=journal (worst performance, but best data protection; journals metadata and all data)
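A hypothetical /etc/fstab entry combining noatime with the fastest journaling mode:
<source lang=text>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>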
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
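The same conversion in plain shell arithmetic (alternatively, blockdev --getsize64 from util-linux prints the byte count directly, skipping fdisk):
<source lang=bash>
# Bytes reported by fdisk above, converted to GiB:
bytes=26843545600
echo "$(( bytes / (1024 * 1024 * 1024) )) GiB"
</source>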
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=text>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd about the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
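To judge whether the cache earns its 80M, a rough hit rate can be computed from Qcache_hits versus Com_select. The counter values below are invented; on a live server take them from SHOW GLOBAL STATUS:
<source lang=bash>
# Hit rate = hits / (hits + selects that reached the executor).
# Counter values are made-up sample numbers.
awk '$1=="Qcache_hits"{h=$2} $1=="Com_select"{s=$2}
 END{printf "%.1f%% query cache hit rate\n", 100*h/(h+s)}' <<'EOF'
Qcache_hits 9000
Com_select 1000
EOF
</source>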
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
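dd already prints the throughput, but the figure is easy to re-derive: bytes divided by elapsed seconds, in decimal megabytes like dd uses:
<source lang=bash>
# 1 GiB written in 1.7552 s, expressed in MB/s:
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 1.7552 / 1000000 }'
</source>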
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
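The 0.7*total_mem_size rule of thumb from the comment above can be computed straight from /proc/meminfo. A small sketch (the 70% factor is only the guideline quoted in the config, not a hard rule):

```shell
# Print a suggested innodb_buffer_pool_size (~70% of physical RAM, in MiB),
# derived from the MemTotal line (given in kiB) of /proc/meminfo.
awk '/^MemTotal:/ { printf "innodb_buffer_pool_size=%dM\n", $2 * 0.7 / 1024 }' /proc/meminfo
```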
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
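One figure worth deriving from those status counters is the query cache hit rate, Qcache_hits / (Qcache_hits + Com_select). A sketch with sample numbers; on a live server you would read the two counters from SHOW GLOBAL STATUS instead:

```shell
# Query cache hit rate from SHOW GLOBAL STATUS counters.
# The two values below are sample numbers, not measurements; fetch real
# ones e.g. with: mysql -NBe "SHOW GLOBAL STATUS LIKE 'Qcache_hits'"
QCACHE_HITS=150000
COM_SELECT=50000
awk -v h="$QCACHE_HITS" -v s="$COM_SELECT" \
    'BEGIN { printf "query cache hit rate: %.1f%%\n", 100 * h / (h + s) }'
# prints: query cache hit rate: 75.0%
```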
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging; it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
f5ff0e91cdef6a1378ec719034b82c15c5a77961
RadSecProxy
0
345
1885
1808
2018-05-17T12:07:45Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed; the fix went upstream on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 2017-01-18].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon does need write access to its log file (or use syslog).
The config, certificate, and key are readable not through the user's primary group (nogroup) but through the supplementary group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -G radsecproxy -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate end date===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
7796f3fb050dd8abdc6ea96e1c1ecf7bf9fb2e13
1888
1885
2018-07-05T14:31:24Z
Lollypop
2
/* Build */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed; the fix went upstream on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 2017-01-18].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.1 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --branch 1.7.1 https://github.com/radsecproxy/radsecproxy tags/1.7.1
$ cd tags/1.7.1
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-1.7.1 --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon does need write access to its log file (or use syslog).
The config, certificate, and key are readable not through the user's primary group (nogroup) but through the supplementary group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -G radsecproxy -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate end date===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
da7fa3e3f0aa5d04806db39f4a3293238be662af
1889
1888
2018-07-05T14:31:53Z
Lollypop
2
/* Build */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed; the fix went upstream on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 2017-01-18].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.1 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --branch 1.7.1 https://github.com/radsecproxy/radsecproxy tags/1.7.1
$ cd tags/1.7.1
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-1.7.1 --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon does need write access to its log file (or use syslog).
The config, certificate, and key are readable not through the user's primary group (nogroup) but through the supplementary group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -G radsecproxy -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate end date===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
8c9087e4c8945f7d7f75dc7146b904bfca19b562
1907
1889
2018-09-06T09:23:13Z
Lollypop
2
/* Another example: Version 1.7.1 from git */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed; the fix went upstream on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 2017-01-18].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
#CACertificatePath /etc/radsec/cert/ca
CACertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateFile /etc/radsec/cert/radsecproxy.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy.key
CertificateKeyPassword ****secret****
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Misconfigured clients are rejected here
realm /myabc\.com$ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon does need write access to its log file (or use syslog).
The config, certificate, and key are readable not through the user's primary group (nogroup) but through the supplementary group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -G radsecproxy -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
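For monitoring, the notAfter line can be turned into a days-until-expiry figure. A minimal sketch, assuming GNU date; the helper name days_left is made up here:
<source lang=bash>
# days_left: print whole days until the given notAfter line expires.
# Negative output means the certificate is already expired. Needs GNU date.
days_left() {
    end="${1#notAfter=}"
    end_epoch=$(date -d "$end" +%s)
    now_epoch=$(date +%s)
    echo $(( (end_epoch - now_epoch) / 86400 ))
}
days_left 'notAfter=Oct  9 12:13:17 2020 GMT'
</source>
Feed it the output of the openssl pipeline above via command substitution.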
0a44808df2659a62e529f9b2dce795f8d1b67784
Apache
0
205
1887
1846
2018-06-21T11:49:02Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Generate a certificate ==
===Adjust the default values===
Set Country & Co. to values that fit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, you can strip the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will not be able to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
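The pipeline simply turns every log file into a `-f <file>` argument pair for apachetop. A portable preview, with echo standing in for apachetop and hypothetical file names:
<source lang=bash>
# Each file name becomes "-f <name>"; the final xargs collects all pairs
# into a single command line. echo stands in for apachetop here.
printf '%s\n' access.log error.log | xargs -n 1 echo -f | xargs echo apachetop
# prints: apachetop -f access.log -f error.log
</source>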
a4e33efbf9b2a20c14256f59e4dc5a129c494d21
1903
1887
2018-08-21T14:40:25Z
Lollypop
2
/* Zertifikat generieren */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Generate a certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=NDR Norddeutscher Rundfunk/OU=IFS'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
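The IFS="," dance in the script is what joins the DNS:... entries into one comma-separated subjectAltName value: expanding "$*" (or, for a bash array, "${subject_alt_names[*]}") inside double quotes joins the elements with the first character of IFS. A standalone sketch:
<source lang=bash>
# "$*" joins the positional parameters with the first character of IFS;
# the script relies on the same behaviour for "${subject_alt_names[*]}".
set -- 'DNS:www.example.org' 'DNS:example.org'
OLD_IFS=${IFS}
IFS=","
printf 'subjectAltName = %s\n' "$*"
IFS=${OLD_IFS}
# prints: subjectAltName = DNS:www.example.org,DNS:example.org
</source>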
===Adjust the default values===
Set Country & Co. to values that fit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, you can strip the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will not be able to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
8c59cad555072f331ce4680455e26007db7e4ad4
1904
1903
2018-08-21T14:40:57Z
Lollypop
2
/* Simple script */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Generate a certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the default values===
Set Country & Co. to values that fit your own setup:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a passphrase, you can strip the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will not be able to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
5dd30b1ddd4430fed39777640e6f9dc351cd0846
SuSE Manager
0
348
1890
1835
2018-07-10T11:26:22Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
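The -x argument behaves like a sed substitution applied to every channel label in the cloned tree, so you can preview the resulting names before cloning anything (echo stands in for the real channel list):
<source lang=bash>
# Preview what -x does to a label: the substitution appends a timestamp
# at the end of line ($).
stamp=$(date '+%Y-%m-%d_%H:%M:%S')
echo 'sles12-sp3-pool-x86_64' | sed "s/\$/-${stamp}/"
</source>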
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]].
<source lang=bash>
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer zypper versions call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in older curl versions.
So the only way to get rid of this is to get rid of spacewalk:
<source lang=bash>
# zypper rm -u spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
Now add any repository that matches your SuSE release, for example from an ISO:
<source lang=bash>
# zypper addrepo -c -t yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
Reinstall zypper in the old version that does not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper in -f $(rpm -qa '*zypper*' --qf '%{NAME} ')
</source>
After that reregister your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
Done.
d1ba0394d7168185f904c3a0d7f0dd109a5d2696
1891
1890
2018-07-10T11:27:25Z
Lollypop
2
/* Clients */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]].
<source lang=bash>
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer zypper versions call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in older curl versions.
So the only way to get rid of this is to get rid of spacewalk:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
Now add any repository that matches your SuSE release, for example from an ISO:
<source lang=bash>
# zypper addrepo -c -t yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
Reinstall zypper in the old version that does not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper in -f $(rpm -qa *zypper* --qf '%{NAME} ')
</source>
After that, re-register your server with the SuSE Manager:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
Done.
11fc044fd3ce93cc7b2af5b568e9aa7807b3e039
1899
1898
2018-07-17T09:33:58Z
Lollypop
2
/* Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE */
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a cloned channel tree named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
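The -x argument is a sed-style substitution applied to every channel label in the tree; the \$ is escaped only because the expression sits inside double quotes next to a command substitution. A minimal stand-alone sketch of what it does to one label, with the timestamp filled in literally:
<source lang=bash>
# "s/$/-<timestamp>/" anchors at the end of the label and appends the
# timestamp, so nothing inside the label itself is touched.
echo 'sles12-sp3-pool-x86_64' | sed 's/$/-2017-11-22_14:26:42/'
# prints: sles12-sp3-pool-x86_64-2017-11-22_14:26:42
</source>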
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
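The two repo_create lines above differ only in the service pack. If you mirror several service packs, the label/URL pairs can be generated instead of typed; a small sketch using the same naming pattern as above:
<source lang=bash>
# Emit one repo_create line per service pack (pattern as used above).
for sp in SP2 SP3; do
    lc=$(echo "$sp" | tr 'A-Z' 'a-z')    # SP2 -> sp2 for the repo label
    echo "repo_create -n opensuse-database-sles12-${lc}-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_${sp}/"
done
</source>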
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]
<source lang=bash>
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
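With many registered keys the list gets long and can be filtered. A self-contained sketch using the sample output above; on a live server you would pipe spacecmd -q activationkey_list straight into grep:
<source lang=bash>
# Sample activation key list (copied from the output above)
keys='6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64'
# Keep only the SLES 12 keys
printf '%s\n' "$keys" | grep 'sles12'
</source>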
==spacecmd==
Just a few useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer zypper versions call curl with a specific cipher list named "DEFAULT_SUSE", which is not accepted by curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
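To see which of the two releases is which, the release strings can be version-sorted; a small self-contained illustration (GNU sort assumed) showing that the working release is the older one, i.e. the fix below is a downgrade:
<source lang=bash>
# sort -V orders RPM-style version strings; the working 7.37.0-28.1
# sorts before the broken 7.37.0-37.17.1.
printf '%s\n' 7.37.0-37.17.1 7.37.0-28.1 | sort -V
</source>
On the client itself, rpm -q curl shows the installed release.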
Now add any repository that matches your installed SLES version, e.g. the ISO it was installed from:
<source lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
or, if it already exists, just enable it:
<source lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</source>
Reinstall the curl packages in the old version from the ISO, which still works with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</source>
And disable the ISO repository:
<source lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</source>
Done.
=====Note: a wrong OpenSSL library in the system library path=====
After some further debugging we found that the system library path forces a wrong OpenSSL library into place.
<source lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</source>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which put /usr/lib/nsr/lib64 into the system library path. That directory contained another libssl.so.1.0.0, built from OpenSSL 1.0.2h. OK. What to do?
<source lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</source>
Check the success:
<source lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</source>
Now you just have to find a way to keep your other software running without manipulating the system library path.
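One option, as a minimal sketch: instead of putting /usr/lib/nsr/lib64 back into the global linker path, start only the affected daemon with LD_LIBRARY_PATH set for that one process. The wrapper location /tmp/nsrexecd-wrapper.sh is just for illustration, and whether NetWorker tolerates being started this way is an assumption you have to verify.
<source lang=bash>
# Sketch: per-process library path instead of a global ld.so.conf.d entry.
# /usr/sbin/nsrexecd is the daemon checked below; the wrapper path is hypothetical.
cat > /tmp/nsrexecd-wrapper.sh <<'EOF'
#!/bin/sh
# Prepend the NetWorker libraries for this process only.
LD_LIBRARY_PATH=/usr/lib/nsr/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec /usr/sbin/nsrexecd "$@"
EOF
chmod +x /tmp/nsrexecd-wrapper.sh
</source>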
One last check for our case: does our NetWorker use its own SSL libraries?
<source lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</source>
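The same check can be done via /proc/&lt;pid&gt;/maps, which also works where map_files is not readable. A sketch, demonstrated on the current shell ($$) so it runs anywhere; substitute $(pgrep --full /usr/sbin/nsrexecd) for the real case:
<source lang=bash>
# List the distinct shared objects a process has mapped; $$ (the current
# shell) stands in for the nsrexecd PID here.
grep -o '/[^ ]*\.so[^ ]*' /proc/$$/maps | sort -u
</source>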
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of Spacewalk is:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
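Piping a download straight into bash executes whatever the server returns. A sketch of a safer pattern: download, check, then run. The printf line below is a local stand-in for the fetched bootstrap script so the steps can be followed offline; in practice fetch from your SUSE Manager URL as above.
<source lang=bash>
# In practice:
#   wget --no-check-certificate -O /tmp/yourbootstrap.sh \
#       https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh
# The printf below just stands in for that download.
printf '#!/bin/sh\necho bootstrap-ok\n' > /tmp/yourbootstrap.sh
sh -n /tmp/yourbootstrap.sh   # syntax check before executing anything
sh /tmp/yourbootstrap.sh      # prints: bootstrap-ok
</source>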
7b8ef034f982604ac96c8278d0a9dc3db87451b6
Category:SuSE
14
359
1894
2018-07-10T11:29:53Z
Lollypop
2
Created page with „[[Kategorie:KnowHow]]“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
5b3e805e2df69a16d339bfd0115e4688ccfd0e65
Admin hints
0
360
1896
2018-07-16T11:07:31Z
Lollypop
2
Created page with „[[Kategorie:KnowHow]] ==Cheat sheets== * [https://cheat.sh Curl usable general cheat sheet] ==DNS== ===Get your IP address=== <source lang=bash> $ dig +short…“
wikitext
text/x-wiki
[[Kategorie:KnowHow]]
==Cheat sheets==
* [https://cheat.sh General cheat sheet, usable via curl]
==DNS==
===Get your IP address===
<source lang=bash>
$ dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com
</source>
c99c7a929fef247205791d3ebc46f2b6c602e3b8
NetApp Commands
0
201
1912
1911
2018-09-26T09:26:13Z
Lollypop
2
/* Create snapshot user for http api */
wikitext
text/x-wiki
[[Kategorie: NetApp]]
==Alignment==
CDOT 8.3:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
To see in which bucket the reads and writes occur:
<source lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</source>
==Performance==
<source lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</source>
=== Flashpool ===
<source lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</source>
== User ==
=== Create snapshot user ===
<source lang=bash>
security login role create -vserver svm1 vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</source>
=== Create snapshot user for http api ===
==== Create the role ====
<source lang=bash>
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
</source>
==== Check role parameter ====
<source lang=bash>
set -showseparator ";" -showallfields true
security login role show -vserver svm42 -role ansible-snapshot-only
vserver;role;profilename;cmddirname;access;query;
Vserver;Role Name;Role Name;Command / Directory;Access Level;Query;
svm42;ansible-snapshot-only;ansible-snapshot-only;DEFAULT;none;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot";readonly;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot create";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot delete";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot modify";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot show";all;"-snapshot ansible_*";
</source>
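The semicolon separator from `set -showseparator ";"` makes this output easy to post-process. A sketch, with the role rows saved to a local file role.txt (hypothetical name), filtering for the command directories granted access level `all`:
<source lang=bash>
# role.txt holds rows as printed above: vserver;role;profile;cmddir;access;query;
cat > role.txt <<'EOF'
svm42;ansible-snapshot-only;ansible-snapshot-only;DEFAULT;none;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot";readonly;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot create";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot delete";all;"-snapshot ansible_*";
EOF
# Print the command directory (field 4) where the access level (field 5) is "all".
awk -F';' '$5 == "all" { print $4 }' role.txt
</source>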
==== Create user with role ====
<source lang=bash>
security login create -vserver svm42 -application ontapi -authentication-method password -role ansible-snapshot-only -user-or-group-name ansible
</source>
==Network interfaces==
<source lang=bash>
ncl01::> network interface show -vserver ncl1
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
ncl01
cluster_mgmt up/up 10.10.20.41/24 ncl01-01 a0a true
ncl01-01-ic1 up/up 10.10.20.44/24 ncl01-01 a0a true
ncl01-01_mgmt1 up/up 10.10.20.42/24 ncl01-01 a0a true
ncl01-02-ic1 up/up 10.10.20.45/24 ncl01-02 a0a true
ncl01-02_mgmt1 up/up 10.10.20.43/24 ncl01-02 a0a true
5 entries were displayed.
ncl01::> network port show -link down
Node: ncl01-01
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
Node: ncl01-02
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
4 entries were displayed.
ncl01::> network port show -health-status degraded
There are no entries matching your query.
ncl01::> network port ifgrp show
Port Distribution Active
Node IfGrp Function MAC Address Ports Ports
-------- ---------- ------------ ----------------- ------- -------------------
ncl01-01
a0a ip 02:a0:98:6d:06:b7 full e0i, e0k
a0b ip 02:a0:98:6d:06:b8 full e3a, e3b, e7a, e7b
ncl01-02
a0a ip 02:a0:98:6d:07:1f full e0i, e0k
a0b ip 02:a0:98:6d:07:20 full e3a, e3b, e7a, e7b
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
ncl01::> network port show -fields speed-oper -port e0j,e0l
node port speed-oper
------------- ---- ----------
ncl01-01 e0j 1000
ncl01-01 e0l 1000
ncl01-02 e0j 1000
ncl01-02 e0l 1000
4 entries were displayed.
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
</source>
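A captured `network port show` listing can also be post-processed with awk to pull out just the down ports. A minimal sketch, using a trimmed copy of the output above as canned input (the field positions are assumed from that listing, column 4 being Link):

```shell
#!/bin/bash
# Sketch: filter the down ports out of a captured "network port show"
# listing. The sample is a trimmed copy of the output above, not live data.
sample='e0j       Default      -                down 1500  auto/1000   -
e0l       Default      -                down 1500  auto/1000   -
e0i       Default      Default          up   9000  auto/10000  -'
printf '%s\n' "$sample" | awk '$4 == "down" {print $1}'
```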
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
28da3157916aa4f299cabfc11ce39042bf9f9849
Nice Options
0
253
1902
1315
2018-07-20T07:42:01Z
Lollypop
2
wikitext
text/x-wiki
Linux:
<source lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
ss -open4all
pwgen -nancy 17
</source>
Solaris:
<source lang=bash>
prstat -Lmaa
iostat -Erni
</source>
5df23e1595ceea4a09ddfdeff2721329dff57200
IPS cheat sheet
0
98
1919
1918
2018-10-22T14:59:46Z
Lollypop
2
/* Update available? */
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
=Cheat sheet=
[[File:Ips-one-liners.pdf|page=1|600px]]
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: hostname -> Accept
-> Download Key
-> Download Certificate
2. Copy them to your Solaris 11 host into /var/pkg/ssl:
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
<pre>
# pkg set-publisher \
    -k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
    -c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
    -G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
5. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
6. Check for updates:
<pre>
# pkg update -nv
</pre>
7. If needed or wanted, do the update:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
So far the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But... which package was it in?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us see what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
=Solaris 11 release=
<source lang=bash>
$ LANG=C pkg info kernel | nawk '$1 == "Version:"{split($2,version,/\./)}$1 == "Branch:"{split($2,branch,/\./)}END{printf ("Solaris %d.%d Update %d SRU %d SRU-Build %d\n",version[2],version[3],branch[3],branch[4],branch[6])}'
Solaris 5.11 Update 2 SRU 0 SRU-Build 42
</source>
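The same Version:/Branch: arithmetic can be checked on any system by feeding the awk program canned `pkg info kernel` output. A sketch using plain awk instead of Solaris nawk, with sample values chosen to match the output shown above:

```shell
#!/bin/bash
# Sketch: the Version:/Branch: parsing from the one-liner above,
# fed with canned "pkg info kernel" output instead of a live query.
sample='Version: 0.5.11
Branch: 0.175.2.0.0.42.2'
printf '%s\n' "$sample" | awk '
$1 == "Version:" {split($2, version, /\./)}
$1 == "Branch:"  {split($2, branch,  /\./)}
END {printf("Solaris %d.%d Update %d SRU %d SRU-Build %d\n",
            version[2], version[3], branch[3], branch[4], branch[6])}'
```

This prints `Solaris 5.11 Update 2 SRU 0 SRU-Build 42`, matching the example above.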
= Update available? =
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
latest_11_3=$(pkg list -H -af ${package} | nawk '$2 ~ /^0.5.11-0.175.3/{print $2; exit;}')
printf "%s\n%s\nLatest_11.3: %s\n" "${local}" "${remote}" "${latest_11_3}" | nawk -v package="${package}" '
BEGIN{
nr=0;
}
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
$1=="Latest_11.3:" {
split($2, latest_part, "-");
latest_version=latest_part[1];
latest_branch=latest_part[2];
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUptodate at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[3], branch_part[5], branch_part[6]);
if (version[0]=="0.5.11" && branch[0] != latest_branch ) {
split(latest_branch, latest_part, /\./);
be_version3=sprintf("%d.%d.%d.%d.%d",version_part[3], latest_part[3], latest_part[4], latest_part[5], latest_part[6]);
printf ("\nTo update and stay in Solaris 11.3-Branch you can use:\n\tpkg install --accept --require-new-be --be-name solaris_%s\n\n", be_version3);
}else if (version[0]=="0.5.11" && branch[0] == latest_branch ) {
printf ("\nYou are at the latest version of the 11.3-Branch (%s), but you can upgrade to 11.4 .\n",branch[0]);
}
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null \
    || echo "Cannot refresh packages" >&2
if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</source>
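The `--be-name` construction inside the script can be isolated into a few lines. A sketch with made-up remote Version/Branch values for a 0.5.11 (11.3-branch) target:

```shell
#!/bin/bash
# Sketch: the boot-environment name construction from the script above,
# in isolation. The Version/Branch values are made up for the demo.
version='0.5.11' branch='0.175.3.31.0.4'
printf '%s %s\n' "$version" "$branch" | awk '{
  split($1, v, /\./); split($2, b, /\./);
  # 0.5.11 -> take v[3]=11 plus the SRU fields b[3], b[4], b[5], b[6]
  printf("solaris_%d.%d.%d.%d.%d\n", v[3], b[3], b[4], b[5], b[6]);
}'
```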
= ZFS automatic snapshots =
<source lang=bash>
pkg install pkg:/desktop/time-slider
svcadm restart svc:/system/dbus:default
</source>
17c18fef7ddbb780a6d9272211e6c8dda93b46fc
Fibrechannel Analyse
0
139
1909
1852
2018-09-19T07:36:32Z
Lollypop
2
/* luxadm */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibrechannel Analyse=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely with ID 1 and with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we sit together with two storage systems on the switch with ID 1 and have a connection to a switch with ID 3 to which two more storage systems are attached.
* Node WWN
Here we see four disk devices with two entries each (same node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths via our single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
===luxadm -e rdls <HW_path> ===
<source lang=bash>
# luxadm -e port 2>/dev/null | awk '{print $1;}' | xargs -n 1 luxadm -e rdls 2>/dev/null
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
30200 2 1 0 0 0 0
30600 2 1 0 0 0 0
10200 1 1 0 0 0 0
11400 2 1 0 0 0 0
10b00 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0,1/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
0 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
</source>
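As the NOTE in the output says, the LESB counters are cumulative, so only the delta between two readings is meaningful. A minimal sketch of that comparison, with canned before/after rows for one al_pa (the values are invented for the demonstration):

```shell
#!/bin/bash
# Sketch: subtract two cumulative LESB readings to get the new errors.
# Column order after al_pa: lnk fail, sync loss, signal loss,
# sequence err, invalid word, CRC. Values below are made up.
before='30200 2 1 0 0 0 0'
after='30200 2 3 0 0 1 0'
paste <(printf '%s\n' $before) <(printf '%s\n' $after) \
    | awk 'NR > 1 {print "counter", NR - 1, "delta", $2 - $1}'
```

Here the sync-loss counter grew by 2 and the invalid-word counter by 1 between the two readings.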
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So a StorageTek 6580/6780.
* Revision: 0784
A rough firmware indication (firmware version: 07.84.47.10).
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when you work with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After that, one block follows per path to this device, consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path from [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, and more:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
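The LUN numbers and OS device names in this output come on separate lines, so pairing them up takes a small awk state machine. A sketch on a trimmed, canned copy of the output above:

```shell
#!/bin/bash
# Sketch: pair LUN numbers with their OS device names from captured
# "fcinfo remote-port --scsi-target" output (trimmed sample, not live).
sample='LUN: 0
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2'
printf '%s\n' "$sample" \
    | awk '/^LUN:/ {lun = $2} /^OS Device Name:/ {print lun, $4}'
```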
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
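The nawk/sort stage of that one-liner can be tried out in isolation on canned cfgadm output. A sketch (plain awk here instead of Solaris nawk; the sample rows are copied from the listing further up):

```shell
#!/bin/bash
# Sketch: the Ap_Id de-duplication stage of the cleanup one-liner,
# fed with canned cfgadm output. It strips the ",<LUN>" suffix so each
# controller::WWN pair is unconfigured only once.
sample='c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable'
printf '%s\n' "$sample" \
    | awk '$NF == "unusable" {gsub(/,[0-9]+$/, "", $1); print $1}' \
    | sort -u
```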
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful! This issues a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (accessing the LUNs of a storage system)==
A LUN that is presented to the host but should not be used there floods the log with messages like these:
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
Blacklisting the affected port-WWN/LUN pairs in /etc/driver/drv/fp.conf masks them:
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
After a reconfiguration reboot the LUNs are masked:
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
Located on Solaris in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to the E-Port of another switch (san-sw_21)
# 7 ports are populated with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and also no SFP (No_Module)
# 9 ports are in use
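On a larger fabric it can help to summarize the port states instead of reading the table line by line. A small awk sketch (sample switchshow rows are inlined here; in practice pipe the real output in):
<source lang=bash>
# Count ports per state; port table rows start with a numeric index,
# so headers and separator lines are skipped automatically.
awk '$1 ~ /^[0-9]+$/ { state[$6]++ }
     END { for (s in state) print state[s], s }' <<'EOF'
  0   0   010000   id    N8   No_Light        FC
  2   2   010200   id    N8   Online          FC  F-Port  21:00:00:24:ff:05:74:e4
 16  16   011000   --    N8   No_Module       FC  (No POD License) Disabled
EOF
</source>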
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
===islshow===
<source lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp FAS8080 running cDOT, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
It even shows the Vserver (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
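To run the backup unattended, a crontab entry for root on the backup host could look like this (the schedule and log path are only examples):
<source lang=bash>
# Nightly at 02:00; append output to a log for troubleshooting
0 2 * * * /opt/bin/backup_brocade_config >>/var/log/brocade_backup.log 2>&1
</source>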
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
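The vendor lookup in the script relies on where the OUI sits in the WWN: for NAA-5 WWNs it starts at the second hex digit, otherwise at the fifth. The same logic as a small bash sketch (the helper name wwn_oui is made up here):
<source lang=bash>
# wwn_oui: extract the vendor OUI from a WWN, mirroring the awk script above.
wwn_oui() {
  local wwn=${1//:/}            # strip the colons
  case $wwn in
    5*) echo "${wwn:1:6}" ;;    # NAA-5: OUI starts at the 2nd hex digit
    *)  echo "${wwn:4:6}" ;;    # otherwise: OUI starts at the 5th hex digit
  esac
}
wwn_oui 50:0a:09:82:80:d1:21:ee   # 00a098 (NetApp)
wwn_oui 21:00:00:24:ff:4a:d3:bc   # 0024ff (Qlogic)
</source>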
=Commands: NetApp=
==fcp topology show : Where is my front-end SAN connected?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port> : Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are connected to switch ID 01, port 06.
==fcp wwpn-alias (set|show) : Alias names for clearer debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
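Note that <i>fcp wwpn-alias set</i> takes the WWPN without colons, while <i>show</i> prints the colon-separated form. Converting between the two is a one-liner:
<source lang=bash>
# Strip the colons for "fcp wwpn-alias set":
printf '%s\n' 21:00:00:24:ff:36:3a:5a | tr -d :
# 21000024ff363a5a
# Add the colons back to match the "show" output:
printf '%s\n' 21000024ff363a5a | sed 's/../&:/g; s/:$//'
# 21:00:00:24:ff:36:3a:5a
</source>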
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, you can do it as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs, including several per line when more than one appears on a line.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
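Example run against a sample line containing two WWNs (the three-argument match() requires gawk):
<source lang=bash>
printf 'zone member 10:00:00:05:33:df:43:5a -> 50:0a:09:81:8d:32:5d:c4\n' |
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}'
</source>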
==Some additions to NetApp's sanlun lun show on Solaris==
<source lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,online[disk],paths[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</source>
d8604d7361f0c657f95cb1b2750dece576e74fa4
Ansible tips and tricks
0
299
1913
1755
2018-10-02T13:19:16Z
Lollypop
2
wikitext
text/x-wiki
[[ Kategorie: Ansible | Tips and tricks ]]
== Ansible commandline ==
=== Get settings for host ===
Gather all settings for the host given in ${hostname}:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</source>
For example:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</source>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the response file, sets each one as a fact (prefixed with oracle_ unless it already is), and collects them in the variable <i>oracle_environment</i>, which can be passed to <i>environment:</i> when you use <i>shell:</i>.
<source lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
  oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</source>
<source lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</source>
== Gathering the Oracle environment ==
<source lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env._setitem_(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</source>
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<source>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</source>
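A minimal playbook sketch that uses this restricted user via the netapp.ontap collection's na_ontap_snapshot module (cluster, SVM, and volume names are placeholders; the snapshot name must match the ansible_* query from the role above):
<source lang=yaml>
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a snapshot as the restricted snapshot user
      netapp.ontap.na_ontap_snapshot:
        state: present
        snapshot: ansible_daily          # must start with "ansible_" per the role query
        volume: vol_data01               # placeholder volume
        vserver: svm01                   # placeholder SVM
        hostname: cluster01
        username: ansible-snapuser
        password: "{{ vault_snapuser_password }}"
</source>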
2b9a87ad8e7f177efdc6f6727ce1312b7cab18a9
PHP
0
361
1914
2018-10-05T08:32:24Z
Lollypop
2
Created the page with: „[[Kategorie:PHP]] ==Install mcrypt on Ubuntu 18.04== <source lang=bash> $ sudo apt -y install gcc make autoconf libc-dev pkg-config libmcrypt-dev php7.2-dev $…“
wikitext
text/x-wiki
[[Kategorie:PHP]]
==Install mcrypt on Ubuntu 18.04==
<source lang=bash>
$ sudo apt -y install gcc make autoconf libc-dev pkg-config libmcrypt-dev php7.2-dev
$ sudo pecl install --nodeps mcrypt-snapshot
</source>
<source lang=bash>
$ echo "extension=mcrypt.so" | sudo tee -a /etc/php/7.2/fpm/php.ini
$ php-fpm7.2 -i | grep mc
Registered Stream Filters => zlib.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, mcrypt.*, mdecrypt.*, bzip2.*, convert.iconv.*
mcrypt
mcrypt support => enabled
mcrypt_filter support => enabled
mcrypt.algorithms_dir => no value => no value
mcrypt.modes_dir => no value => no value
</source>
0ce4790098649be36bc1121c6363592105c273da
SunServer
0
210
1915
798
2018-10-17T09:14:11Z
Lollypop
2
/* ILOM */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 systems=
==ILOM==
===Access ILOM from OS===
<source lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</source>
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC systems=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<source lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</source>
Example:
<source lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</source>
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means turning suppression of break signals off ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
b04a42b23e91efd575d222f09cc31aceef2bfedf
1916
1915
2018-10-17T09:30:51Z
Lollypop
2
/* Access ILOM from OS */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 systems=
==ILOM==
===Access ILOM from OS===
<source lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</source>
or
<source lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</source>
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC systems=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<source lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</source>
Example:
<source lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</source>
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means turning suppression of break signals off ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
d54b4d029e86dff1c1a4e9819a6726d0f9056608
1917
1916
2018-10-17T09:47:05Z
Lollypop
2
/* ILOM */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 systems=
==ILOM==
===Reset SP from OS===
<source lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</source>
===Access ILOM from OS===
<source lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</source>
or
<source lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</source>
===Set SP IP address from OS via ipmitool===
* Set:
<source lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</source>
* Check:
<source lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</source>
===Restore lost Serial/Product Information===
<source lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</source>
=SPARC Systems=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<source lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</source>
Example:
<source lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</source>
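The script maps a disk to its bay with the formula slot = 4*controller + PhyNum (four bays per controller, as on this T4-1). The example output can be reproduced by hand:
<source lang=bash>
# Slot numbering used by get_disk_slot.sh: each controller drives 4 PhyNums.
controller=1
phynum=2
slot=$(( 4 * controller + phynum ))
echo "controller $controller, PhyNum $phynum => Slot $slot"
</source>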
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<source lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</source>
* Delete default gateway:
<source lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</source>
* Set:
<source lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</source>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means: turn suppression of break signals off ;-) :
<source lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</source>
2af8df8ad29faf13e9dddf133048862f750d578a
ZFS nice commands
0
362
1922
2018-12-11T08:23:52Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:ZFS]] =Some ZFS commands I use often (on Linux)= ==zpool== ===Get zpool status=== <source lang=bash> # zpool status -P pool: rpool state: ONLI…“
wikitext
text/x-wiki
[[Kategorie:ZFS]]
=Some ZFS commands I use often (on Linux)=
==zpool==
===Get zpool status===
<source lang=bash>
# zpool status -P
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/disk/by-id/ata-SanDisk_SDSSDHII960G_151740411091-part4 ONLINE 0 0 0
</source>
* -P : Display real paths for vdevs instead of only the last component of the path.
<source lang=bash>
# zpool status -PL
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
errors: No known data errors
</source>
* -P : Display real paths for vdevs instead of only the last component of the path.
* -L : Display real paths for vdevs resolving all symbolic links.
===Get zpool size===
<source lang=bash>
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 788G 609G 179G - 53% 77% 1.00x ONLINE -
</source>
Ooooh... bad fragmentation! So what? It's an SSD!
===Get the ashift value===
<source lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 9
</source>
which means 2^9 = 512, i.e. 512-byte blocks in the backend... that is uncool for SSDs.
<source lang=bash>
# echo $[ 2 ** 12 ]
4096
# zpool set ashift=12 rpool
</source>
<source lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 12
</source>
which means 2^12 = 4096, i.e. 4K blocks in the backend. Perfect!
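ashift is the base-2 logarithm of the vdev block size; the mapping for the two values seen here:
<source lang=bash>
# ashift is log2 of the block size: blocksize = 2^ashift
old=$(( 1 << 9 ))    # ashift=9  -> 512-byte blocks
new=$(( 1 << 12 ))   # ashift=12 -> 4096-byte (4K) blocks
echo "ashift=9 -> $old bytes, ashift=12 -> $new bytes"
</source>
Caveat (to the best of my knowledge, on ZFS on Linux): the pool-level ashift property only acts as a default for vdevs added later; existing vdevs keep the ashift they were created with.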
==zfs==
==zdb==
===Traverse all blocks===
<source lang=bash>
# zdb -b rpool
Traversing all blocks to verify nothing leaked ...
loading space map for vdev 0 of 1, metaslab 196 of 197 ...
609G completed (4928MB/s) estimated time remaining: 0hr 00min 00sec
No leaks (block sum matches space maps exactly)
bp count: 32920989
ganged count: 0
bp logical: 760060348928 avg: 23087
bp physical: 650570102784 avg: 19761 compression: 1.17
bp allocated: 654308115456 avg: 19875 compression: 1.16
bp deduped: 0 ref>1: 0 deduplication: 1.00
SPA allocated: 654308115456 used: 77.33%
additional, non-pointer bps of type 0: 237576
Dittoed blocks on same vdev: 1230844
</source>
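The compression figure zdb prints is just bp logical divided by bp physical; recomputing it from the numbers above:
<source lang=bash>
logical=760060348928   # "bp logical" from the zdb output above
physical=650570102784  # "bp physical" from the zdb output above
ratio=$(awk -v l="$logical" -v p="$physical" 'BEGIN { printf "%.2f", l/p }')
echo "$ratio"
</source>
This reproduces the 1.17 compression ratio reported by zdb.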
b0b4fdb38703c18e5d0aba5c80945a8822c6c4a2
Linux Tipps und Tricks
0
273
1923
1820
2019-01-02T12:09:12Z
Lollypop
2
/* Resize a GPT partition */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always correspond to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the ID reported by lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-online device to force the zpool to re-check its size, resizing it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion when the partition grows again:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
56891c747b22e4d0b5d3f9ee202a2f1f7046d0a8
1926
1923
2019-02-19T10:30:36Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda:
<source lang=bash>
# echo 1 > /sys/class/block/sda/device/rescan
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always correspond to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the ID reported by lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-online device to force the zpool to re-check its size, resizing it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion when the partition grows again:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
5a16ea4013f99d068bc1c3f833cdb5982de68766
1927
1926
2019-02-19T10:31:57Z
Lollypop
2
/* Rescan a device (for example after changing a VMDK size) */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# echo $[ 512 * $(cat /sys/block/sda/size) / 1024 / 1024 / 1024 ]
20
# echo 1 > /sys/class/block/sda/device/rescan
# echo $[ 512 * $(cat /sys/block/sda/size) / 1024 / 1024 / 1024 ]
25
</source>
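The $[ ] expression above converts the sysfs sector count (512-byte units) to GiB; standalone, with a hypothetical sector count for a 20 GiB disk:
<source lang=bash>
sectors=41943040   # hypothetical /sys/block/sda/size for a 20 GiB disk
gib=$[ 512 * sectors / 1024 / 1024 / 1024 ]
echo "${gib} GiB"
</source>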
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always correspond to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the ID reported by lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-online device to force the zpool to re-check its size, resizing it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion when the partition grows again:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
ac7b3bd6718e8929822365d7b916d1e0558fd895
1928
1927
2019-02-19T10:32:43Z
Lollypop
2
/* Rescan a device (for example after changing a VMDK size) */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 / 1024 / 1024 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 / 1024 / 1024 ]
25
</source>
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always correspond to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the ID reported by lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-online device to force the zpool to re-check its size, resizing it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion when the partition grows again:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
52122d4ee98bd68a37294f7dbfd46e42bf3e021d
1929
1928
2019-02-19T10:39:41Z
Lollypop
2
/* Rescan a device (for example after changing a VMDK size) */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 / 1024 / 1024 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 / 1024 / 1024 ]
25
# parted /dev/${device} print free
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
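Cross-checking parted's warning above: the "extra 10485760 blocks" are 512-byte sectors and correspond exactly to the 5 GiB added in VMware:
<source lang=bash>
blocks=10485760   # "extra blocks" from the parted warning above
gib=$(( blocks * 512 / 1024 / 1024 / 1024 ))
echo "${gib} GiB"
</source>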
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always correspond to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the ID reported by lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]] after that parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now inform ZPool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already onlined device to force a recheck in the ZPool to resize it without export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand to off if you want prevent to autoexpand if partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
9edd9b6d7c85be9560a26727cd652160a9f7dd37
VMWare Hints
0
343
1924
1783
2019-01-09T14:09:59Z
Lollypop
2
/* Links */
wikitext
text/x-wiki
[[Kategorie:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
=== ESXTOP ===
* [http://www.running-system.com/vsphere-6-esxtop-quick-overview-for-troubleshooting/ vSphere 6 ESXTOP quick overview for Troubleshooting]
* [https://communities.vmware.com/docs/DOC-9279 Interpreting esxtop Statistics]
* [http://www.vmworld.net/wp-content/uploads/2012/05/Esxtop_Troubleshooting_ger.pdf PDF : vSphere 5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2012/08/esxtop_english_v11.pdf PDF : vSphere 5.5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2015/04/ESXTOP_vSphere6.pdf PDF : vSphere 6 ESXTOP quick Overview for Troubleshooting]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
* [http://www.running-system.com VMWare related BLOG]
* [https://kb.vmware.com/s/article/2106283 Required ports for vCenter Server 6.x (2106283)]
Google for "vsphere <version> configuration maximums":
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
3a738a22472bd70d6fd3ad76ab2657331d67e00e
Ubuntu apt
0
120
1925
1872
2019-01-22T12:48:31Z
Lollypop
2
/* Get all non LTS packages */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|apt]]
== Get all non LTS packages ==
<source lang=bash>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</source>
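The awk filter above can be exercised in isolation by feeding it a canned ''apt-cache show'' record; a minimal sketch, where the package name ''examplepkg'' and the value ''Supported: 3y'' are made up for illustration:

```bash
# Canned "apt-cache show" record; "examplepkg" / "3y" are made-up values.
sample='Package: examplepkg
Supported: 3y
'
# Same awk logic as above: remember pkg and support per record, then on
# the blank line print every package whose support period is not the
# "5y" of an LTS package.
printf '%s\n' "$sample" | awk '
BEGIN{ support="none"; }
/^Package:/,/^$/{
  if(/^Package:/){ pkg=$2; }
  if(/^Supported:/){ support=$2; }
  if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/{ support="none"; }'
# Prints "examplepkg:" followed by a tab and "3y".
```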
== Ubuntu support status ==
<source lang=bash>
$ ubuntu-support-status --show-unsupported
</source>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
==Use this proxy config in the shell==
<source lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</source>
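What that awk emits can be seen by feeding it canned ''apt-config dump'' output; a sketch, where the proxy host ''proxy.example:3128'' is a placeholder:

```bash
# Canned "apt-config dump Acquire" lines; the host is a placeholder.
sample='Acquire::http::Proxy "http://proxy.example:3128";
Acquire::ftp::Proxy "http://proxy.example:3128";'
# Split on "::" or space: field 3 is the option name, field 2 the
# scheme, field 4 the quoted value -- yielding eval-able assignments.
printf '%s\n' "$sample" | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}'
# Emits pairs like: http_proxy="http://proxy.example:3128"; / export http_proxy
```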
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the name service.
=== Pin the normal release ===
<source lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</source>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Update the package lists ===
<source lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</source>
=== Check with "apt-cache policy" which version is preferred now ===
<source lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</source>
=== Upgrade to the packages from the new release ===
<source lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</source>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<source lang=bash>
# apt -t zesty install libstdc++6
...
</source>
8bae3c7fdf5d954aa22559ec6b8af57e688791c9
Linux Tipps und Tricks
0
273
1932
1931
2019-02-19T11:02:08Z
Lollypop
2
/* Rescan a device (for example after changing a VMDK size) */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second sends the reboot request.
For more details, look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
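The GiB arithmetic used above can be wrapped as a small helper; a sketch, where the function name ''size_gib'' is my own and the fixed 512 reflects that /sys/block/<device>/size always counts 512-byte sectors:

```bash
# size_gib is a made-up helper name; /sys/block/<dev>/size reports
# 512-byte sectors regardless of the device's logical sector size.
size_gib() {
    # $1: sector count, e.g. "$(cat /sys/block/sda/size)"
    echo $(( 512 * $1 / 1024 / 1024 / 1024 ))
}
size_gib 52428800   # a 25 GiB disk has 52428800 sectors -> prints 25
```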
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
<source lang=bash>
# mount
# pvs
# zpool status
# etc.
</source>
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the SCSI ID reported by lsscsi above.
Et voilà:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The underlying virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Online the already-online device again to force the zpool to recheck the device size and expand it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voilà:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand to off if you want to prevent automatic expansion when the partition grows again:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
520ad0f11e70ff60bee039c9f07dafb5fd3d671f
TShark
0
238
1933
1301
2019-05-10T14:56:53Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal based wireshark.]
The ultimate tool to sniff network traffic when you have no X. It analyzes the traffic as wireshark does. Great tool!
==MySQL traffic==
To look on an application server for MySQL traffic you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only pakets which are from our ethernet address on interface ''IFACE''.
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show the TLS versions in use.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
# tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<=0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.quic.negotiated_version -e ssl.pct.client_version -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000303
10.155.4.97 192.168.1.141 1812 0x00000303
192.168.1.85 192.168.1.140 2083 0x00000303
...
</source>
fe122bcb2162feec230c5ca43b902d0cd7f6c6cc
1934
1933
2019-05-10T14:57:12Z
Lollypop
2
/* Decode SSL Connections */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes traffic just as Wireshark does. Great tool!
==MySQL traffic==
To inspect MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our Ethernet address on interface ''IFACE''.
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show the TLS versions in use.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
# tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<=0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.quic.negotiated_version -e ssl.pct.client_version -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000303
10.155.4.97 192.168.1.141 1812 0x00000303
192.168.1.85 192.168.1.140 2083 0x00000303
...
</source>
9cd6e228ebbd20c2e65c2b8b43f445c7d5f03e7d
1935
1934
2019-05-10T14:58:07Z
Lollypop
2
/* Decode SSL Connections */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes traffic just as Wireshark does. Great tool!
==MySQL traffic==
To inspect MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our Ethernet address on interface ''IFACE''.
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show connections that still use TLS versions lower than 1.2.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
# tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.quic.negotiated_version -e ssl.pct.client_version -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</source>
0c590fc3e44b7d1457f1804bbd3d0bb88d2903b0
1936
1935
2019-05-10T15:02:40Z
Lollypop
2
/* Decode SSL Connections */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes traffic just as Wireshark does. Great tool!
==MySQL traffic==
To inspect MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our Ethernet address on interface ''IFACE''.
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show connections that still use TLS versions lower than 1.2.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
# tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</source>
67d474055083e84bd9da2f2696153f0e457ca04a
1937
1936
2019-05-10T15:06:31Z
Lollypop
2
/* Decode SSL Connections */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes traffic just as Wireshark does. Great tool!
==MySQL traffic==
To inspect MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets that come from our Ethernet address on interface ''IFACE''.
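The MAC extraction can be tried on its own; a minimal sketch, assuming typical ''ip link show'' output (the interface line and address below are made-up sample data):
<source lang=bash>
# Simulated `ip link show eth0` output (example data, not from a real host)
ip_out='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff'
# Same awk as in the tshark line: print field 2 of the link/ether line
mac=$(printf '%s\n' "$ip_out" | awk '$1 ~ /link\/ether/{print $2}')
echo "$mac"
</source>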
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show connections that still use TLS versions lower than 1.2.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</source>
or for HTTPS:
<source lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</source>
92f8ff1fb91db421ad5996c4b14bd749fc61804c
PowerDNS
0
287
1938
1829
2019-05-20T07:27:46Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<source lang=apt>
APT::Default-Release "xenial";
</source>
===/etc/apt/preferences.d/pdns===
<source lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</source>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<source>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</source>
===Do the upgrade===
<source lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</source>
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> change it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
or
<source lang=ini>
# /lib/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</source>
<source lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</source>
14c2bfeadf329fc68e542a71335d0293d1c48d75
1939
1938
2019-05-20T07:49:08Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Kategorie: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<source lang=apt>
APT::Default-Release "xenial";
</source>
===/etc/apt/preferences.d/pdns===
<source lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000

Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</source>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<source>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</source>
===Do the upgrade===
<source lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</source>
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> change it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart journald:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
or
<source lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</source>
<source lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</source>
2e6697166c7cc635651f8748dec9e72bde2a9d28
NetApp and Linux
0
227
1940
871
2019-05-24T10:44:50Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because it only holds the partitioning metadata of the extended partition. We are aligned!
==Remove multipath LUNs==
This writes out the commands to delete multipathed devices for filer <filer>:
<source lang=bash>
( sanlun lun show -p <filer> ; echo ) | gawk '
BEGIN {
split(volumes,vols,":");
}
/ONTAP Path:/,/^$/ {
if(/ONTAP Path:/){
opath=$NF;
if (volumes==""){
todelete="yes";
}else{
odev=$NF;
gsub(/^.*:/,"",odev);
for(vol in vols){
if (odev == vols[vol] || volumes==""){
todelete="yes";
}
}
}
}
if(todelete!="yes")next;
if(/Host Device:/){
command="dmsetup info --columns --noheadings -o open /dev/mapper/"$NF;
command | getline inuse;
close(command);
if (inuse!=0) {
printf "#\n## Device %s (%s) is in use!!!\n## check with: lsof | grep \"$(dmsetup info --columns --noheadings --separator \",\" -omajor,minor /dev/mapper/%s)\"\n#\n",$NF,opath,$NF;
} else {
printf "multipath -w %s\n",$NF;
printf "multipath -f /dev/mapper/%s && (\n",$NF;
}
};
if(inuse==0 && $2 ~ /(primary|secondary)/){
printf "echo 1 > /sys/block/%s/device/delete\n",$3;
}
if (inuse==0 && /^$/) { printf ")\n";}
}
/^$/ {
mpathdevice="";
todelete="no";
delete devices;
}
'
</source>
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp KB ID: 3011193 - What is an unaligned I/O?]
e9e4e3ec771645bb693513b77f1a3d7adec81b35
1941
1940
2019-05-24T10:47:32Z
Lollypop
2
/* Remove multipath LUNs */
wikitext
text/x-wiki
[[Kategorie:NetApp|Linux]]
[[Kategorie:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because it only holds the partitioning metadata of the extended partition. We are aligned!
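The core of the check can be sketched in isolation: with 512-byte sectors, a partition is 4k-aligned when its start sector is divisible by 8. Using the start sectors from the listing above:
<source lang=bash>
# Start sectors taken from the fdisk output above
for start in 2048 13674494
do
  if [ $(( start % 8 )) -eq 0 ]
  then
    echo "${start}: aligned"
  else
    echo "${start}: misaligned (bucket $(( start % 8 )))"
  fi
done
</source>
This reproduces "Bucket 6" for the extended partition at sector 13674494.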
==Remove multipath LUNs==
This writes out the commands to delete multipathed devices for filer <filer> (empty volumes parameter to gawk means all volumes):
<source lang=bash>
( sanlun lun show -p <filer> ; echo ) | gawk -v volumes="/vol/vol1:/vol/vol2" '
BEGIN {
split(volumes,vols,":");
}
/ONTAP Path:/,/^$/ {
if(/ONTAP Path:/){
opath=$NF;
if (volumes==""){
todelete="yes";
}else{
odev=$NF;
gsub(/^.*:/,"",odev);
for(vol in vols){
if (odev == vols[vol] || volumes==""){
todelete="yes";
}
}
}
}
if(todelete!="yes")next;
if(/Host Device:/){
command="dmsetup info --columns --noheadings -o open /dev/mapper/"$NF;
command | getline inuse;
close(command);
if (inuse!=0) {
printf "#\n## Device %s (%s) is in use!!!\n## check with: lsof | grep \"$(dmsetup info --columns --noheadings --separator \",\" -omajor,minor /dev/mapper/%s)\"\n#\n",$NF,opath,$NF;
} else {
printf "multipath -w %s\n",$NF;
printf "multipath -f /dev/mapper/%s && (\n",$NF;
}
};
if(inuse==0 && $2 ~ /(primary|secondary)/){
printf "echo 1 > /sys/block/%s/device/delete\n",$3;
}
if (inuse==0 && /^$/) { printf ")\n";}
}
/^$/ {
mpathdevice="";
todelete="no";
delete devices;
}
'
</source>
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp KB ID: 3011193 - What is an unaligned I/O?]
cd8188ba0ad62fc00cac2d95f7d3e9980e553165
Oracle Tips and Tricks
0
220
1942
1874
2019-05-24T11:53:14Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile has been sourced (as happens at login), you have a lower-case alias for every ORACLE_SID that sets everything you need.
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
==Startup some Databases manually==
For example: first DEVDE, then all other DEV*
<source lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</source>
==Shutdown some Databases manually==
For example: first all other DEV*, then DEVDE
<source lang=bash>
for SID in $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab ) DEVDE
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
lsnrctl stop ${SID}
printf "shutdown immediate\nquit\n" | sqlplus -s "/ as sysdba"
done
</source>
==Get session of system process id (pid)==
<source lang=sql>
col sid format 999999
col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser from v$session a, v$process b where a.paddr= b.addr and b.spid='&spid' order by b.spid;
</source>
8969b2fdd23fd08e95fa9220515e6afc30474e2b
1943
1942
2019-05-24T11:53:45Z
Lollypop
2
/* Get session of system process id (pid) */
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<source lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</source>
Once .bash_profile has been sourced (as happens at login), you have a lower-case alias for every ORACLE_SID that sets everything you need.
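The oratab parsing can be tried without a real oratab; a minimal sketch using sample entries (the SIDs and paths below are made up) that exercises the same field splitting and skip rule:
<source lang=bash>
# Sample oratab content (hypothetical entries)
oratab='# comment
ORCL:/u01/app/oracle/product/19/dbhome_1:Y
+ASM:/u01/app/grid:N'
result=$(while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
  # Same skip rule as above: comments, blanks, RAC entries starting with + or -
  if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
  printf '%s -> %s (start=%s)\n' "${ORACLE_SID}" "${ORACLE_HOME}" "${DBSTART}"
done <<< "${oratab}")
echo "${result}"
</source>
Only the ORCL line survives; the comment and the +ASM entry are skipped.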
==Recover datafiles==
Problem:
<source lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</source>
<source lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</source>
<source lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</source>
Recover datafile:
<source lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</source>
Set file online:
<source lang=oracle11>
SQL> alter database datafile 18 online;
</source>
Anything else?
<source lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</source>
==Show non default settings==
<source lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</source>
==Show CPU count from database==
<source lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</source>
==Startup some Databases manually==
For example: first DEVDE, then all other DEV*
<source lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</source>
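The SID selection in the loop header can be checked standalone; a sketch against sample oratab lines (the SIDs and paths are made up):
<source lang=bash>
# Hypothetical oratab entries
oratab='DEVDE:/u01/app/oracle/product/19:Y
DEVFR:/u01/app/oracle/product/19:Y
PROD1:/u01/app/oracle/product/19:Y'
# Same filter as above: all DEV* SIDs except DEVDE itself
others=$(printf '%s\n' "${oratab}" | awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}')
echo "${others}"
</source>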
==Shutdown some Databases manually==
For example: first all other DEV*, then DEVDE
<source lang=bash>
for SID in $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab ) DEVDE
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
lsnrctl stop ${SID}
printf "shutdown immediate\nquit\n" | sqlplus -s "/ as sysdba"
done
</source>
==Get session id (sid) of system process id (pid)==
<source lang=sql>
col sid format 999999
col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser from v$session a, v$process b where a.paddr= b.addr and b.spid='&spid' order by b.spid;
</source>
732d564511235fadba36dd4528038ec249430e49
Bash cheatsheet
0
37
1944
1877
2019-06-03T08:47:31Z
Lollypop
2
/* Numbers */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services that must be stopped in one order and started in the reverse order:
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work the same way, e.g. always advancing by 3:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with two loop variables:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i> and use the no-op builtin <i>:</i> as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ ${#} -ge 1 ]
then
format=$1; shift;
printf "%s : ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" ${*}
else
while read input
do
printf "%s : %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20161103 10:47:25 test
20161103 10:47:25
20161103 10:47:25 toast
$ printlog test
20161103 10:47:30 test
$ printlog "test %s %d %s\n" "bla" 0 "bli"
20170721 09:45:06 : test bla 0 bli
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
</source>
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
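The name-based dispatch near the end of the skeleton relies on two parameter substitutions; a minimal sketch with made-up values:
<source lang=bash>
NAME=myapp
SELF=myapp-start.sh          # as if the script were called via a symlink myapp-start.sh
command=${SELF%.sh}          # strip the extension   -> myapp-start
command=${command##${NAME}-} # strip the name prefix -> start
echo "${command}"
</source>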
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Clean up on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Clean up on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
</source>
1a2a5b373ad02f132ca3383f4cd3227a140c8d79
1945
1944
2019-06-03T10:48:45Z
Lollypop
2
/* Log with timestamp */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
Other step sizes work as well, e.g. in increments of 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even with two loop variables:
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use <i>continue</i> or its alias <i>:</i> in the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
format=$1
shift
printf "%s ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" "$@"
else
#while read -t 0.1 input
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20190603 12:48:13 test
20190603 12:48:13
20190603 12:48:13 toast
$ printlog "test\n"
20190603 12:48:19 test
$ printlog "test %s %d %s\n" "bla" 0 "bli"
20190603 12:48:25 test bla 0 bli
$
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
$ echo $[ 2 ** 8 ] # 2^8
</source>
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $*"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
== Add a timestamp to all output and send to a file ==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
</source>
c53243a796757b50e80031674d13961cc2c12079
1946
1945
2019-06-03T10:49:50Z
Lollypop
2
/* Calculations */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example split an ip:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
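If you want to avoid the unquoted expansion, the same split can be sketched with <i>read -a</i> (bash; IFS is set only for the one command):
<source lang=bash>
$ ip="10.1.2.3"
$ IFS=. read -r -a octets <<< "${ip}"
$ echo "${#octets[@]} octets -> ${octets[*]}"
4 octets -> 10 1 2 3
</source>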
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
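For example, relative components are collapsed by the <i>pushd</i> (the path below assumes a standard Linux layout where /usr/bin exists):
<source lang=bash>
$ dir_resolve /usr/../usr/bin/env
/usr/bin/env
</source>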
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
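If the element values contain no whitespace, the reversal can also be sketched in one line with <i>tac</i>:
<source lang=bash>
# Print one element per line and reverse the line order
declare -a SERVICES_START=( $(printf '%s\n' "${SERVICES_STOP[@]}" | tac) )
</source>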
=Loops=
==Numbers==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work as well, e.g. in increments of 3:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with two loop variables:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
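Since bash 4 the brace expansion itself takes a step, so the increment variant can also be written as:
<source lang=bash>
$ for i in {0..9..3} ; do echo $i ; done
</source>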
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use <i>continue</i> or its alias <i>:</i> in the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
if [ -n "$*" ]
then
format=$1
shift
printf "%s ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" "$@"
else
#while read -t 0.1 input
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}"
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog
20190603 12:48:13 test
20190603 12:48:13
20190603 12:48:13 toast
$ printlog "test\n"
20190603 12:48:19 test
$ printlog "test %s %d %s\n" "bla" 0 "bli"
20190603 12:48:25 test bla 0 bli
$
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
7
$ echo $[ 2 ** 8 ] # 2^8
256
</source>
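The <i>$[ ]</i> form is deprecated in bash; the POSIX arithmetic expansion <i>$(( ))</i> gives the same results:
<source lang=bash>
$ echo $(( 3 + 4 ))
7
$ echo $(( 2 ** 8 )) # 2^8
256
</source>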
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $*"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
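A variant of the same idea without the explicit FIFO: bash's process substitution creates the pipe for you. Note that, as above, the date is evaluated once at script start, not per line:
<source lang=bash>
#!/bin/bash
# Redirect stdout & stderr through sed via process substitution
exec > >(sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g") 2>&1
echo bla
echo bli >&2
</source>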
== Add a timestamp to all output and send to a file ==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
</source>
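As a sketch of what the export step produces, a hypothetical <i>--log-level=debug</i> argument (option name invented for illustration) ends up as an uppercase variable; here with <i>${param//-/_}</i> so every hyphen is replaced:
<source lang=bash>
$ arg="--log-level=debug"
$ param=${arg%%=*} ; value=${arg#*=}
$ param=${param#--} ; param=${param//-/_}
$ echo "${param^^}=${value}"
LOG_LEVEL=debug
</source>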
3f74649dd411b4ef435dee52f85db24ca3d08a27
NGINX
0
363
1947
2019-06-27T11:19:05Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:NGINX]] ==Add module to nginx on Ubuntu== For example http-auth-ldap: <source lang=bash> mkdir /opt/src cd /opt/src apt source nginx cd nginx-* e…“
wikitext
text/x-wiki
[[Kategorie:NGINX]]
==Add module to nginx on Ubuntu==
For example http-auth-ldap:
<source lang=bash>
mkdir /opt/src
cd /opt/src
apt source nginx
cd nginx-*
export HTTPS_PROXY=<your proxy server>
git clone https://github.com/kvspb/nginx-auth-ldap.git debian/modules/http-auth-ldap
./configure \
--with-cc-opt="$(dpkg-buildflags --get CFLAGS) -fPIC $(dpkg-buildflags --get CPPFLAGS)" \
--with-ld-opt="$(dpkg-buildflags --get LDFLAGS) -fPIC" \
--prefix=/usr/share/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--with-http_v2_module \
--with-threads \
--without-http_gzip_module \
--add-dynamic-module=debian/modules/http-auth-ldap
make modules
sudo install --mode=0644 --owner=root --group=root objs/ngx_http_auth_ldap_module.so /usr/lib/nginx/modules/
</source>
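To actually use the freshly built module, it still has to be loaded from the main context of nginx.conf; a minimal sketch, assuming the module path from the install step above:
<source lang=bash>
# /etc/nginx/nginx.conf (main context, outside the http block)
load_module /usr/lib/nginx/modules/ngx_http_auth_ldap_module.so;
</source>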
4f3014b2bab66d2f4d6f5b23873fb01d2b9b1633
Oracle Discoverer
0
364
1948
2019-07-31T06:38:11Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Oracle]] == Changing the IP address == Just some lines from last change... sorry <source lang=bash> vi /etc/sysconfig/network/ifcfg-eth0 vi /etc/…“
wikitext
text/x-wiki
[[Kategorie:Oracle]]
== Changing the IP address ==
Just some lines from last change... sorry
<source lang=bash>
vi /etc/sysconfig/network/ifcfg-eth0
vi /etc/sysconfig/network/routes
vi /etc/hosts
/etc/init.d/network restart
# Change the VLAN in vCenter
# reconnect with new IP
#
# Change the config
#
/opt/Middleware/ashome_1/chgip/scripts/chgiphost.sh -noconfig -oldhost discoverer01.srv.net.de -newhost discoverer.srv.net.de -oldip 172.16.31.29 -newip 172.16.7.4 -instanceHome /opt/Middleware/asinst_1
/etc/init.d/weblogic stop
# Adminserver, too
/opt/Middleware/wlserver_10.3/server/bin/setWLSEnv.sh
/opt/Middleware/wlserver_10.3/common/bin/wlst.sh
wls:/offline> readDomain('/opt/Middleware/user_projects/domains/ClassicDomain')
wls:/offline/ClassicDomain> cd ('/Machine/neuerhostname')
wls:/offline/ClassicDomain/Machine/neuerhostname> machine=cmo
wls:/offline/ClassicDomain/Machine/neuerhostname> cd ('/Server/AdminServer')
wls:/offline/ClassicDomain/Server/AdminServer> set('Machine', machine)
wls:/offline/ClassicDomain/Server/AdminServer> updateDomain()
wls:/offline/ClassicDomain/Server/AdminServer> exit()
# Start after the changes
/etc/init.d/weblogic start
netstat -plant | grep 9001
tail -f /opt/Middleware/user_projects/domains/ClassicDomain/servers/WLS_DISCO/logs/WLS_DISCO.out
</source>
b32d25df001c83e1d3492da09b5e98cf8337c9af
Chrome
0
351
1949
1840
2019-08-02T13:11:06Z
Lollypop
2
wikitext
text/x-wiki
==Overview of Chrome URLs==
* chrome://about/
== Apps ==
* chrome://apps/
== Extensions ==
* chrome://extensions/
== Special settings ==
* chrome://flags/
== Your Downloads ==
* chrome://downloads/
==Useful URLs==
* chrome://net-internals/#dns -> Clear host cache
* chrome://net-internals/#sockets
1f5f09e50470e3d453ab87f7ee4a361df51bc992
HP Smart Array Controller
0
365
1950
2019-08-13T11:07:14Z
Lollypop
2
Die Seite wurde neu angelegt: „[[Kategorie:Hardware]] === Install the tool === <source lang=bash> # echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --…“
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=== Install the tool ===
<source lang=bash>
# echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --codename)/current non-free" >> /etc/apt/sources.list.d/hp.list
# curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
# apt update && apt install --yes ssacli
</source>
=== Revive formerly failed disk ===
<source lang=bash>
root@loop1:~# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): Failed
logicaldrive 6 (931.48 GB, RAID 0): OK
# ssacli ctrl slot=0 ld 5 modify reenable forced
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): OK
logicaldrive 6 (931.48 GB, RAID 0): OK
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:4] disk HP LOGICAL VOLUME 2.52 /dev/sde
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
</source>
4bd13171084c11cb4a3265203c1e610e6ad5ed0f
1951
1950
2019-08-13T11:07:32Z
Lollypop
2
/* Revive formerly failed disk */
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=== Install the tool ===
<source lang=bash>
# echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --codename)/current non-free" >> /etc/apt/sources.list.d/hp.list
# curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
# apt update && apt install --yes ssacli
</source>
=== Revive formerly failed disk ===
<source lang=bash>
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): Failed
logicaldrive 6 (931.48 GB, RAID 0): OK
# ssacli ctrl slot=0 ld 5 modify reenable forced
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): OK
logicaldrive 6 (931.48 GB, RAID 0): OK
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:4] disk HP LOGICAL VOLUME 2.52 /dev/sde
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
</source>
bac62cf3ea7c20420edc6ddb338425c9feab65ab
ZFS on Linux
0
222
1952
1860
2019-08-20T12:48:48Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as raw vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Kernel settings for ZFS==
<source lang=bash>
root@zfshost:~# modprobe -c | grep zfs | grep -v alias
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
===Limit the cache without reboot (non-permanent)===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
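To double-check the byte value being written (512 MB):
<source lang=bash>
$ echo $[512*1024*1024]
536870912
</source>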
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
5b91b968a2e05ff1226068b75201587dfec10830
1953
1952
2019-08-20T12:54:08Z
Lollypop
2
/* Kernel settings for ZFS */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as raw vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
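The ashift value is just log2 of the physical sector size; a quick sanity check in plain bash:
<source lang=bash>
$ size=4096 ; ashift=0
$ while (( size > 1 )) ; do size=$(( size / 2 )) ; ashift=$(( ashift + 1 )) ; done
$ echo ${ashift}
12
</source>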
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -I{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
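As an aside, the awk one-liner used above to extract the proxy URL from apt.conf can be tried in isolation; the proxy URL below is a made-up example:

```shell
# Strip the quotes and the trailing semicolon from an Acquire::http::Proxy
# line and print only the URL field.
printf 'Acquire::http::Proxy "http://user:pass@proxy:3128";\n' | \
awk '/Acquire::http::Proxy/ {gsub(/\"/,""); gsub(/;$/,""); print $2}'
```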
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs afterwards, since that is the filesystem read when the module is loaded at boot:
<source lang=bash>
# update-initramfs -k all -u
</source>
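The size comments above give the values as products; quick shell arithmetic confirms the literal numbers used in the options:

```shell
# l2arc_write_max as set above: 500 * 1024 * 1024 bytes
echo $((500 * 1024 * 1024))
# the module default mentioned in the comment: 8 * 1024 * 1024 bytes
echo $((8 * 1024 * 1024))
```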
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check the files under:
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
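The arcstats values are plain bytes (the middle column is the kstat data type id). A small awk converts c_max to MiB; here it runs on a canned line matching the output above:

```shell
# c_max of 1073741824 bytes is 1 GiB, i.e. 1024 MiB
printf 'c_max 4 1073741824\n' | awk '/^c_max/ {printf "%d MiB\n", $3 / 1048576}'
```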
===Limit the cache temporarily (without reboot)===
For example, limit it to 512 MB (too small for production environments; just an example):
<source lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example, limit it to 512 MB (too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on Solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
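The zdb-parsing awk inside print_vdevs can be exercised with canned input; the fragment below only sketches the two line types the script cares about (the devices are hypothetical):

```shell
# Minimal fake "zdb -C" fragment: only the type: and path: lines matter.
printf "            type: 'mirror'\n            path: '/dev/sda1'\n            path: '/dev/sdb1'\n" | \
awk -F':' '
$1 ~ /^[[:space:]]*type$/ {
    gsub(/[ ]+/, "", $NF)
    type = substr($NF, 2, length($NF) - 2)
    if (type == "mirror") printf " \\\n\t%s", type
}
$1 ~ /^[[:space:]]*path$/ {
    gsub(/[ ]+/, "", $NF)
    vdev = substr($NF, 2, length($NF) - 2)
    printf " \\\n\t%s", vdev
}
END { printf "\n" }'
```

It emits a backslash-continued argument list (`mirror`, then each device path), ready to be appended to a `zpool create` line.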
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
=Apache=
[[Kategorie:Apache]]
== Generate a certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names+=( "DNS:${i}" )
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
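The IFS="," juggling around the openssl call is what joins the SAN array with commas: `${array[*]}` expands with the first character of IFS as the separator. The effect in isolation (hostnames are hypothetical; bash required for arrays):

```shell
# ${arr[*]} joins array elements using the first character of IFS
subject_alt_names=( "DNS:www.example.org" "DNS:example.org" )
OLD_IFS=${IFS}
IFS=","
echo "${subject_alt_names[*]}"
IFS=${OLD_IFS}
```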
===Adjust the defaults sensibly===
Set Country and the other fields to values that suit you:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate a key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you prefer to store the key without a password, you can strip the passphrase afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue the certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===Inspect the certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configure Apache==
=== Serving mp4 media files ===
If your media files live on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid serving corrupted data to clients.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime.c>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
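The double xargs builds a single apachetop invocation with one -f option per logfile; substituting echo for apachetop makes the constructed command line visible (the log paths are hypothetical):

```shell
# Each filename becomes "-f <file>"; the second xargs passes them all at once.
printf '/var/log/apache2/a.log\n/var/log/apache2/b.log\n' | \
xargs -n 1 echo -f | xargs echo apachetop
```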
4c5a728b131493f3dcb9f778c522a81c394af4e6
1956
1955
2019-09-04T08:10:24Z
Lollypop
2
/* Serving mp4 media files */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Zertifikat generieren ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$[ 5 * 365 ]
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096bits \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the default values===
Set the country and related fields to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, you can remove the encryption afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime.c>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A live, top-like view of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
133eacf2fc007f5752e6d9336d341a3e0d058eac
1957
1956
2019-09-04T08:32:58Z
Lollypop
2
/* Apache konfigurieren */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names+=( "DNS:${i}" )
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the default values===
Set the country and related fields to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you want to store the key without a password, you can remove the encryption afterwards like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime.c>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A live, top-like view of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
11a4447a1b956778fb836584da1e07adbb4434ff
1958
1957
2019-09-04T08:38:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names+=( "DNS:${i}" )
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime.c>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A live, top-like view of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
54c91f3e8ab7696bee18fce4a63675a74858fc9c
1959
1958
2019-09-04T08:38:48Z
Lollypop
2
/* Adjust the default values */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names+=( "DNS:${i}" )
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
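The IFS trick at the end of the script is what joins the collected SAN entries with commas before they are substituted into the temporary OpenSSL config. A minimal standalone sketch of just that step (the host names here are made up):

<source lang=bash>
#!/bin/bash
# Collect SAN entries the same way the script above does
subject_alt_names=()
for i in www.example.de example.de
do
    subject_alt_names+=( "DNS:${i}" )
done
# Expanding with [*] joins the elements with the first character of IFS
OLD_IFS=${IFS}
IFS=","
printf "subjectAltName = %s\n" "${subject_alt_names[*]}"
IFS=${OLD_IFS}
</source>

This prints <code>subjectAltName = DNS:www.example.de,DNS:example.de</code>, exactly the line that ends up in the generated <code>[SAN]</code> section.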
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate the key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
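For scripted setups, both steps can also be done non-interactively; a sketch that produces an unencrypted key directly (the file name <code>/tmp/demo.ec-key</code> is a placeholder):

<source lang=bash>
#!/bin/bash
# Non-interactive variant: generate an unencrypted P-256 key in one step
openssl ecparam -genkey -name prime256v1 -noout -out /tmp/demo.ec-key
# Show the curve that is actually stored in the key
openssl ec -in /tmp/demo.ec-key -noout -text | grep 'ASN1 OID'
</source>

Skipping the <code>-aes256</code> encryption step is only advisable when the key file's filesystem permissions are restrictive.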
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime.c>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
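Strict-Transport-Security only takes effect once a client has reached the site over HTTPS at least once, so a plain-HTTP companion vhost that redirects everything is usually wanted alongside the configuration above. A minimal sketch, assuming the same host name:

<source lang=apache>
<VirtualHost *:80>
    ServerName ssl.server.de
    # Send every plain-HTTP request to HTTPS; HSTS applies from then on
    Redirect permanent / https://ssl.server.de/
</VirtualHost>
</source>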
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Show a live top view of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
abbb9ed60323c642ad7aa890b925e49051e0946a
1960
1959
2019-09-04T08:39:04Z
Lollypop
2
/* Generate the key */
wikitext
text/x-wiki
[[Kategorie:Apache]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$[ 5 * 365 ]
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
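On newer OpenSSL (1.1.1 or later) the SAN part of the script can also be done in a single call with <i>-addext</i>, without a temporary config file. A minimal sketch; the host names and file paths are placeholders:
<source lang=bash>
# Self-signed certificate with SANs in one call (OpenSSL >= 1.1.1).
# www.example.org / example.org and the /tmp paths are placeholders.
openssl req -x509 -new -nodes -sha512 -days 1825 \
    -newkey rsa:4096 \
    -subj '/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT/CN=www.example.org' \
    -addext 'subjectAltName=DNS:www.example.org,DNS:example.org' \
    -keyout /tmp/www.example.org.key \
    -out /tmp/www.example.org.crt 2>/dev/null
# Check that the SANs really made it into the certificate:
openssl x509 -noout -text -in /tmp/www.example.org.crt | grep 'DNS:'
</source>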
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected, encrypted key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
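A quick way to confirm that the encryption is really gone: openssl must be able to read the key without prompting for a pass phrase. A sketch with a throwaway key; the file name is a placeholder:
<source lang=bash>
# Generate an unencrypted throwaway EC key ...
openssl ecparam -genkey -name prime256v1 -out /tmp/demo.ec-key 2>/dev/null
# ... and read it back; no pass phrase prompt appears:
openssl ec -in /tmp/demo.ec-key -noout 2>/dev/null \
    && echo "key is readable without a pass phrase"
# An unencrypted PEM key carries no ENCRYPTED header:
if ! grep -q 'ENCRYPTED' /tmp/demo.ec-key
then
    echo "no encryption header found"
fi
</source>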
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
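For monitoring (e.g. from cron), the validity can also be checked non-interactively: <i>openssl x509 -checkend</i> exits non-zero if the certificate expires within the given number of seconds. A sketch against a freshly generated throwaway certificate; names and paths are placeholders:
<source lang=bash>
# Create a short-lived throwaway certificate to test against.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
    -subj '/CN=throwaway.example' \
    -keyout /tmp/throwaway.key -out /tmp/throwaway.crt 2>/dev/null
# Does it survive the next 24 hours (86400 seconds)?
if openssl x509 -checkend 86400 -noout -in /tmp/throwaway.crt >/dev/null
then
    echo "still valid tomorrow"     # Prints: still valid tomorrow
else
    echo "expires within a day - renew now"
fi
</source>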
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem such as CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
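Generating the 4096-bit DH parameters referenced in the config comment can take many minutes. To sanity-check the commands quickly, the same flow works with a small demo size (NOT for production; 512 bits is insecure):
<source lang=bash>
# Demo only: 512-bit DH parameters generate instantly but are insecure.
# For production use 4096 as in the config comment above.
openssl dhparam -out /tmp/dhparam_demo.pem 512 2>/dev/null
# Inspect the result; a production file is checked the same way.
openssl dhparam -in /tmp/dhparam_demo.pem -noout -text | head -n 1
</source>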
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Show a live top view of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
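The pipeline works by turning every log file into its own "-f &lt;file&gt;" pair before handing all pairs to a single apachetop call. Its mechanics can be shown with echo standing in for apachetop:
<source lang=bash>
# Each file name becomes "-f <name>"; the second xargs collects all
# pairs into one command line. echo stands in for apachetop here.
printf '%s\n' access.log error.log \
    | xargs -n 1 echo -f \
    | xargs echo apachetop
# Prints: apachetop -f access.log -f error.log
</source>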
4115e8c20fd83c033629fed4ce7d43a49de87722
Find free ip
0
366
1961
2019-09-06T14:03:29Z
Lollypop
2
Created the page: „[[Kategorie: Bash]] <source lang=bash> #!/bin/bash # # $Id: find_free_ip.sh,v 1.1 2019/09/06 14:00:58 lollypop Exp $ # $Source: /var/cvs/lollypop/scripts/linu…“
wikitext
text/x-wiki
[[Kategorie: Bash]]
<source lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.1 2019/09/06 14:00:58 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
input=${1}
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec ${suffix}))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/${suffix}\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} >/dev/null 2>&1 ; pingable=$? # ${ip} is already expanded inside ${PING}
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfree\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$[ ${count} + 1 ]
else
printf "%s\n" "${info}"
count=1
fi
done
</source>
8861e4fb7a91f5ed5bc8db232a03b76bd63dca50
1962
1961
2019-09-06T14:03:58Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Bash|find_free_ip]]
<source lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.1 2019/09/06 14:00:58 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
input=${1}
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec ${suffix}))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/${suffix}\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} >/dev/null 2>&1 ; pingable=$? # ${ip} is already expanded inside ${PING}
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfree\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$[ ${count} + 1 ]
else
printf "%s\n" "${info}"
count=1
fi
done
</source>
205243091db8a34742b01af76f0670cbe14f4237
1963
1962
2019-09-06T14:04:36Z
Lollypop
2
Lollypop moved page [[Bash find free ip]] to [[Find free ip]] without creating a redirect: silly name
wikitext
text/x-wiki
[[Kategorie: Bash|find_free_ip]]
<source lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.1 2019/09/06 14:00:58 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
input=${1}
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec ${suffix}))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/${suffix}\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} >/dev/null 2>&1 ; pingable=$? # ${ip} is already expanded inside ${PING}
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfree\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$[ ${count} + 1 ]
else
printf "%s\n" "${info}"
count=1
fi
done
</source>
205243091db8a34742b01af76f0670cbe14f4237
1964
1963
2019-09-06T14:34:19Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: Bash|find_free_ip]]
<source lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.2 2019/09/06 14:33:32 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
function usage () {
printf "Usage: ${0} <ip address>[/(<CIDR suffix>|<netmask>)]\n\n"
printf " This script searches a range of IP addresses for ones that have no reverse DNS.\n"
printf " If no CIDR suffix or netmask is given, the default is a class C (/24) range of 256 addresses.\n"
printf " address : This has to be an IPv4 address. Zero octets can be omitted.\n"
printf " For example 192.168 is sufficient for 192.168.0.0 .\n"
printf " CIDR suffix : This describes the number of bits set to 1 from left in the netmask.\n"
printf " netmask : Four octets representing the netmask.\n"
printf "\n"
}
case ${1} in
""|--help|-h)
usage
exit 1
;;
*)
input=${1}
;;
esac
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
function bin2ones () {
bin=${1}
ones=0
for((i=0;i<${#bin};i++))
do
bit=${bin:$i:1}
[ ${bit} -eq 0 ] && break
ones=$[ ones + 1 ]
done
echo ${ones}
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec $(fillOctets ${suffix})))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/$(bin2ones ${suffixbin})\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} >/dev/null 2>&1 ; pingable=$? # ${ip} is already expanded inside ${PING}
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfree\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$[ ${count} + 1 ]
else
printf "%s\n" "${info}"
count=1
fi
done
</source>
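The address arithmetic in the script reduces to bit masking on 32-bit integers. A standalone sketch of the same masking idea in pure bash, shown here for the example prefix 192.168.1.10/28:
<source lang=bash>
#!/bin/bash
# Dotted quad <-> 32-bit integer, then network/broadcast by masking.
ipv42dec () {
    local IFS=. o
    read -ra o <<< "${1}"
    echo $(( (o[0] << 24) + (o[1] << 16) + (o[2] << 8) + o[3] ))
}
dec2ipv4 () {
    local d=${1}
    echo "$(( d >> 24 )).$(( (d >> 16) & 255 )).$(( (d >> 8) & 255 )).$(( d & 255 ))"
}
addr=$(ipv42dec 192.168.1.10)
mask=$(( (0xffffffff << (32 - 28)) & 0xffffffff ))    # netmask for /28
echo "network:   $(dec2ipv4 $(( addr & mask )))"      # 192.168.1.0
echo "broadcast: $(dec2ipv4 $(( (addr & mask) | (~mask & 0xffffffff) )))" # 192.168.1.15
</source>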
8cb2a79b8a98369b0b4b60d9b82336695f616f15
Ubuntu networking
0
278
1965
1838
2019-09-18T06:00:49Z
Lollypop
2
/* netplan */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New in 17.10==
===netplan===
Configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
For example: <i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160: {dhcp4: true}
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.61"
</source>
80dcb1af98f0665f5c5476970a694f4ff93fde5b
1966
1965
2019-09-18T06:02:37Z
Lollypop
2
/* New in 17.10 */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New since Ubuntu 17.10==
===netplan===
Configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
====DHCP====
For example: <i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.61"
</source>
8070ac760053805dc3e83fa4d2618331bde51263
1967
1966
2019-09-18T06:02:56Z
Lollypop
2
/* DHCP */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New since Ubuntu 17.10==
===netplan===
Configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.61"
</source>
710044c95ed0867b99e179ad8506f8186f65f908
1968
1967
2019-09-18T06:04:46Z
Lollypop
2
/* netplan */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New since Ubuntu 17.10==
===netplan===
Configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
The file name is /etc/netplan/<whatever you want; I prefer the interface name>.yaml. The .yaml extension is not optional!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.61"
</source>
6ab0e6a4ae45a65d3035a96e0b7a53a7ca04b77e
1969
1968
2019-09-18T06:08:10Z
Lollypop
2
/* Bonding */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New since Ubuntu 17.10==
===netplan===
Configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
The file name is /etc/netplan/<whatever you want; I prefer the interface name>.yaml. The .yaml extension is not optional!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.60"
- "192.168.3.61"
</source>
47fa236174fad026cc14ce7a86757934b9d1c0b5
1970
1969
2019-09-18T06:21:28Z
Lollypop
2
/* New since Ubuntu 17.10 */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==New since Ubuntu 17.10==
===netplan===
Configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
The file name is /etc/netplan/<whatever you want; I prefer the interface name>.yaml. The .yaml extension is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without a reboot, use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    slave1:
      match:
        macaddress: "3c:a7:2a:22:af:70"
      dhcp4: no
    slave2:
      match:
        macaddress: "3c:a7:2a:22:af:71"
      dhcp4: no
  bonds:
    bond007:
      interfaces:
        - slave1
        - slave2
      parameters:
        mode: balance-rr
        mii-monitor-interval: 10
      dhcp4: no
      addresses:
        - 192.168.189.202/27
      gateway4: 192.168.189.193
      nameservers:
        search:
          - mcs.de
        addresses:
          - "192.168.3.60"
          - "192.168.3.61"
</source>
7ba1b176503da732f96d69285ca9bbbf98d56956
1971
1970
2019-09-18T06:27:26Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==The ip command==
===ipa===
This is not only India Pale Ale! On Linux,
<source lang=bash>
# ip a
</source>
shows you the configured addresses.
It is the shortcut for "ip address show".
===iplishup===
This just sounds like a word, which helps you keep it in mind.
<source lang=bash>
# ip li sh up
</source>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
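In the same spirit, another shortcut worth knowing (my addition, same iproute2 tool):
<source lang=bash>
# ip r
</source>
shows the routing table; it is short for "ip route show". The abbreviations work because iproute2 accepts any unambiguous prefix of a subcommand.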
==New since Ubuntu 17.10==
===netplan===
The configuration formerly kept in /etc/network/interfaces{,.d} now lives in /etc/netplan in YAML syntax.
The file is named /etc/netplan/<whatever you want, I prefer the interface name>.yaml; the .yaml extension is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
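Newer netplan versions also provide <i>netplan try</i>, which applies the configuration and rolls it back automatically unless you confirm within a timeout, a nice safety net against exactly this connection loss (my addition; check that your netplan version supports it):
<source lang=bash>
# netplan try
# netplan try --timeout 30
</source>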
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
  ethernets:
    ens160:
      dhcp4: yes
  version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    slave1:
      match:
        macaddress: "3c:a7:2a:22:af:70"
      dhcp4: no
    slave2:
      match:
        macaddress: "3c:a7:2a:22:af:71"
      dhcp4: no
  bonds:
    bond007:
      interfaces:
        - slave1
        - slave2
      parameters:
        mode: balance-rr
        mii-monitor-interval: 10
      dhcp4: no
      addresses:
        - 192.168.189.202/27
      gateway4: 192.168.189.193
      nameservers:
        search:
          - mcs.de
        addresses:
          - "192.168.3.60"
          - "192.168.3.61"
</source>
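Once applied, the kernel reports the bond state under /proc/net/bonding; a quick check (my addition, the file name matches the bond007 key above):
<source lang=bash>
# cat /proc/net/bonding/bond007
</source>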
8a07e98cd9cf007fd45874c0b92da34b9829f159
Pass
0
367
1972
2019-09-18T13:39:36Z
Lollypop
2
Die Seite wurde neu angelegt: „=pass - The standard unix password manager= ==Tipps & Tricks== ===SSH=== To pass the password to the ssh password promt you need another tool, too: sshpass…“
wikitext
text/x-wiki
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need another tool: sshpass.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</source>
====Cooler way====
=====Create the password entry=====
Put only the password on the first line (needed for sshpass).
<source lang=bash>
$ pass edit Customers/CustomerA/myuser@sshhost
</source>
=====Create an alias=====
<source lang=bash>
$ alias customer-sshhost='sshpass -f <(pass Customers/CustomerA/myuser@sshhost) ssh myuser@sshhost'
</source>
=====Use it=====
<source lang=bash>
$ customer-sshhost
myuser@sshhost:~$
</source>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
37b68ecfe639b8f07b861104990be038899ed9f9
1973
1972
2019-09-18T14:21:26Z
Lollypop
2
/* Tipps & Tricks */
wikitext
text/x-wiki
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/sshuser@sshhost) ssh sshuser@sshhost'
</source>
=====Use it=====
<source lang=bash>
$ customerA-sshhost
sshuser@sshhost:~$
</source>
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
Or even cooler, with a separate history and defaults file per connection:
<source lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
=====Use it=====
<source lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</source>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
fa023e00d965d1b97e39021690763549c9cf9a3b
1974
1973
2019-09-18T14:22:13Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|pass]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/sshuser@sshhost) ssh sshuser@sshhost'
</source>
=====Use it=====
<source lang=bash>
$ customerA-sshhost
sshuser@sshhost:~$
</source>
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
Or even cooler, with a separate history and defaults file per connection:
<source lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
=====Use it=====
<source lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</source>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
557cf7a698c6c616fc0bd1856b6dd7994ca492c2
1975
1974
2019-09-18T14:22:36Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|pass]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/sshuser@sshhost) ssh sshuser@sshhost'
</source>
=====Use it=====
<source lang=bash>
$ customerA-sshhost
sshuser@sshhost:~$
</source>
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
Or even cooler, with a separate history and defaults file per connection:
<source lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
=====Use it=====
<source lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</source>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
9f47f0a2fde575e295b9d22502b976ec694ff028
1976
1975
2019-09-18T14:23:17Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|pass - The standard unix password manager]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/sshuser@sshhost) ssh sshuser@sshhost'
</source>
=====Use it=====
<source lang=bash>
$ customerA-sshhost
sshuser@sshhost:~$
</source>
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
Or even cooler, with a separate history and defaults file per connection:
<source lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
=====Use it=====
<source lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</source>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
e837ccaa6560fa50758387f35d847c21e35ddb50
1977
1976
2019-09-18T14:23:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux|pass]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/sshuser@sshhost) ssh sshuser@sshhost'
</source>
=====Use it=====
<source lang=bash>
$ customerA-sshhost
sshuser@sshhost:~$
</source>
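A variant without the process substitution: sshpass can also read the password from the SSHPASS environment variable via -e (my sketch, same pass entry as above; the environment of a process is readable by the same user, so the usual caveats apply):
<source lang=bash>
$ alias customerA-sshhost='SSHPASS=$(pass Customers/CustomerA/sshuser@sshhost) sshpass -e ssh sshuser@sshhost'
</source>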
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql.
====Obvious way====
<source lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</source>
====Cooler way====
=====Create an alias=====
<source lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
Or even cooler, with a separate history and defaults file per connection:
<source lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</source>
=====Use it=====
<source lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</source>
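Note that --password=... can show up in the process list while the mysql client starts; the MYSQL_PWD environment variable avoids that (my variant; MYSQL_PWD has its own caveats, as the environment is readable by processes of the same user):
<source lang=bash>
$ alias customerB-mysqlhost-mysqluser='MYSQL_PWD=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql) mysql --user mysqluser --host mysqlhost'
</source>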
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
9f47f0a2fde575e295b9d22502b976ec694ff028
Ansible tips and tricks
0
299
1978
1913
2019-10-17T08:06:44Z
Lollypop
2
/* Get settings for host */
wikitext
text/x-wiki
[[ Kategorie: Ansible | Tips and tricks ]]
== Ansible commandline ==
=== Get settings for host ===
Gather the settings for the host given in ${hostname}:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</source>
For example:
<source lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</source>
Gather the groups for the host given in ${hostname}:
<source lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</source>
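To see the facts Ansible gathers itself (as opposed to the inventory hostvars above), the setup module can be called the same way; its filter parameter narrows the output (my addition):
<source lang=bash>
$ ansible -m setup ${hostname}
$ ansible -m setup -a 'filter=ansible_default_ipv4' localhost
</source>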
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads some variables from the response file, sets each one as a fact under its own name (prefixed with oracle_ if not already there), and additionally collects them in the variable <i>oracle_environment</i>, which can be used for <i>environment:</i> when you use <i>shell:</i>.
<source lang=yaml>
vars:
  oracle_user: oracle
  oracle_version: 12cR2
  oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</source>
<source lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
  shell: |
    awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
  register: oracle_response_variables
  with_items:
    - ORACLE_HOME
    - ORACLE_BASE
    - INVENTORY_LOCATION
  tags:
    - oracle
    - oracle_install
- name: Setting facts from response file to oracle_environment
  set_fact:
    "{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
    oracle_environment: "{{ oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
  with_items:
    - "{{ oracle_response_variables.results }}"
  tags:
    - oracle
    - oracle_install
</source>
== Gathering oracle environment ==
<source lang=yaml>
- name: Calling oraenv
  shell: |
    # Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
    eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
    # Call /usr/local/bin/oraenv for additional settings
    . /usr/local/bin/oraenv -s
    # Just register what we need for Oracle
    env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
  register: env
  changed_when: False
- name: Creating environment ora_env
  set_fact:
    ora_env: |
      {# Creating empty dictionary #}
      {%- set tmp_env={} -%}
      {# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
      {%- for line in env.stdout_lines -%}
      {{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
      {%- endfor -%}
      {# Print the created variable #}
      {{ tmp_env }}
- debug: var=ora_env
</source>
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<source>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</source>
10defd7571c6eeb9ed30003d1a425d6906ecd01d
MariaDB on ZFS
0
294
1979
1359
2019-11-08T13:33:56Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
==ZFS parameters==
<source lang=bash>
zfs set atime=off MYSQL-DATA
zfs set compression=lz4 MYSQL-DATA
zfs set atime=off MYSQL-LOG
zfs set compression=lz4 MYSQL-LOG
zfs set recordsize=8k MYSQL-DATA/data
zfs set recordsize=16k MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-LOG/ib_log
</source>
<source lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</source>
===If you have innodb_file_per_table=on===
<source lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</source>
* If you have only InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<source lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = off
skip-innodb_doublewrite
</source>
<source lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</source>
On Linux, do not forget to add the new directories to the AppArmor profile!
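For example, on Ubuntu the local include of the mysqld profile can be extended like this (my sketch; profile and include file names can differ between distributions and MariaDB/MySQL packagings):
<source lang=bash>
# cat >> /etc/apparmor.d/local/usr.sbin.mysqld << EOF
/MYSQL-DATA/** rwk,
/MYSQL-LOG/** rwk,
EOF
# apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
</source>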
d531553a54ab2f640060c2cd845128ca92e555ad
Linux Tipps und Tricks
0
273
1982
1932
2019-11-26T12:39:37Z
Lollypop
2
/* Optional: Resize the ZPool in it */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
<source lang=bash>
# mount
# pvs
# zpool status
# etc.
</source>
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now inform ZPool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-onlined device to force the ZPool to recheck the device size and grow it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand to off if you want to prevent automatic expansion when the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume:
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</source>
Done.
e04a8bf374ae36bb835907b3a123950b63c47541
1983
1982
2019-11-26T12:43:28Z
Lollypop
2
/* Rescan a device (for example after changing a VMDK size) */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
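The first echo can also be done with sysctl, which reads a bit friendlier (equivalent, just a matter of taste):
<source lang=bash>
# sysctl -w kernel.sysrq=1
# echo b > /proc/sysrq-trigger
</source>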
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
<source lang=bash>
# mount
# pvs
# zpool status
# etc.
</source>
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
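To verify the result, print both partition tables; sgdisk's --print output includes the disk GUID, so you can see that the GUIDs differ after --randomize-guids (my addition):
<source lang=bash>
# sgdisk --print /dev/sdX
# sgdisk --print /dev/sdY
</source>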
==Resize a GPT partition==
The partition was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition by setting autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Onlining the already-online device forces the zpool to re-read the device size, so it expands without an export/import (newer ZFS versions can also use <i>zpool online -e</i> for this, independent of autoexpand):
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voilà:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
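The three zpool steps above can be collected into one small script. A sketch, shown as a dry run with the pool and partition names from this example; drop the echo to actually run the commands:
<source lang=bash>
pool=MYSQL-DATA
part=/dev/sdb1
for cmd in "zpool set autoexpand=on $pool" \
           "zpool online $pool $part" \
           "zpool set autoexpand=off $pool" ; do
  echo "would run: $cmd"
done
</source>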
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume:
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</source>
Done.
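After pvresize the new extents are still unused (PFree). To hand them to a logical volume and grow its filesystem in one step, <i>lvextend --resizefs</i> can be used. The LV path below is hypothetical; take the real one from <i>lvs</i>. Shown as a dry run:
<source lang=bash>
lv=/dev/vg-root/root   # hypothetical LV path, check `lvs` first
echo "would run: lvextend --resizefs -l +100%FREE $lv"
</source>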
2c43e7bac247070fa34a68b78098e4f61887a2a3
1988
1983
2019-12-10T07:10:50Z
Lollypop
2
/* Scan all SCSI buses for new devices */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into void. No filesystem sync is done, just and ugly fast direkt reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset hat means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[Scan all SCSI buses for new devices]] above.
<source lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! Like in this example the lowest SCSI-ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check it is not longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]] after that parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now inform ZPool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already onlined device to force a recheck in the ZPool to resize it without export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand to off if you want prevent to autoexpand if partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</source>
Done.
1766d4aee8442a819c1d93929a76436d7c8e13e6
1989
1988
2019-12-10T07:12:05Z
Lollypop
2
/* Scan all FC ports for new devices */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into void. No filesystem sync is done, just and ugly fast direkt reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset hat means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices]] above.
<source lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! Like in this example the lowest SCSI-ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check it is not longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]] after that parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now inform ZPool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already onlined device to force a recheck in the ZPool to resize it without export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand to off if you want prevent to autoexpand if partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</source>
Done.
8b13452d3139df80b70777cad786f25ef88824f5
1990
1989
2019-12-10T07:13:27Z
Lollypop
2
/* Scan all FC ports for new devices */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into void. No filesystem sync is done, just and ugly fast direkt reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset hat means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<source lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! Like in this example the lowest SCSI-ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check it is not longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]] after that parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now inform ZPool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already onlined device to force a recheck in the ZPool to resize it without export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand to off if you want prevent to autoexpand if partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</source>
Done.
997ec2ff40d9e8437e0fad7ce77c6b7d93bc4bf9
1992
1990
2019-12-18T10:22:05Z
Lollypop
2
/* Copy a GPT partition table */
wikitext
text/x-wiki
[[Kategorie:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into void. No filesystem sync is done, just and ugly fast direkt reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset hat means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<source lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! Like in this example the lowest SCSI-ID is not always the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check it is not longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
Or with:
<source lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6442MB  6441MB  zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6442MB  6441MB  zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  53,7GB  53,7GB  zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
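With parted ≥ 3.4 the same resize can be done non-interactively: --fix answers the GPT warning automatically and 100% grows the partition to the end of the disk (a sketch for the device from this example):
<source lang=bash>
root@mariadb:~# parted -s --fix /dev/sdb resizepart 1 100%
</source>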
===Optional: Resize the ZPool in it===
Check the current values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
MYSQL-DATA  5,97G   994M  5,00G       44G    47%    16%  1.00x  ONLINE  -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now let the zpool grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-online device to force the zpool to re-check the device size, so it resizes without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voilà:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
MYSQL-DATA  50,0G   994M  49,0G         -     5%     1%  1.00x  ONLINE  -
rpool       19,9G  3,36G  16,5G         -    19%    16%  1.00x  ONLINE  -
</source>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
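As a shortcut, <i>zpool online -e</i> (expand) forces the same expansion in one step, without toggling autoexpand (same pool and device as above):
<source lang=bash>
root@mariadb:~# zpool online -e MYSQL-DATA /dev/sdb1
</source>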
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume:
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/sda1  vg-root  lvm2 a--  <45.00g 10.00g
</source>
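Usually the point of the exercise is to hand the new space to a logical volume and its filesystem. With --resizefs, lvextend grows both in one step; note that the LV name <i>root</i> below is an assumption, check <i>lvs</i> for the real name:
<source lang=bash>
# lvextend --resizefs -l +100%FREE /dev/vg-root/root
</source>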
Done.
276ca63be05420c3a6249e58a137463b7e1c4098
TShark
0
238
1986
1985
2019-11-28T13:52:57Z
Lollypop
2
/* Radius traffic */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes traffic just as Wireshark does. Great tool!
==MySQL traffic==
To watch MySQL traffic on an application server, you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
The little awk magic selects only packets originating from our Ethernet address on interface ''IFACE''.
==Radius traffic==
Find the client with MAC address fc-18-3c-4a-c1-fa:
<source lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2 , see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</source>
With older tshark versions try:
<source lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</source>
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show TLS versions in use that are lower than 1.2.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</source>
Or for HTTPS:
<source lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</source>
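The hex version values from the table above can also be translated right in the shell; a small helper sketch (<i>tls_name</i> is a made-up name):

```shell
# Map a tshark handshake version value (e.g. 0x00000303) to a TLS version name
tls_name() {
  case ${1#0x0000} in
    0304) echo "TLS 1.3" ;;
    0303) echo "TLS 1.2" ;;
    0302) echo "TLS 1.1" ;;
    0301) echo "TLS 1.0" ;;
    *)    echo "unknown (${1})" ;;
  esac
}

tls_name 0x00000301    # TLS 1.0
```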
c79703128cce7c2744cee269bd2eef203c74a5d7
Bash cheatsheet
0
37
1987
1946
2019-12-04T12:49:50Z
Lollypop
2
/* Log with timestamp */
wikitext
text/x-wiki
[[Kategorie:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<source lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</source>
=Useful variable substitutions=
==split==
For example, split an IP address:
<source lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</source>
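The reverse direction, joining an array into one string, can be sketched with IFS; setting it inside the $( ) subshell keeps the change local:

```shell
delimiter="."
declare -a octets=(10 1 2 3)
# "${octets[*]}" joins the elements with the first character of IFS
ip=$(IFS="${delimiter}"; echo "${octets[*]}")
echo "${ip}"    # 10.1.2.3
```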
==dirname==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</source>
==basename==
<source lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</source>
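The same prefix/suffix mechanism strips file extensions; a short sketch:

```shell
f=archive.tar.gz
echo "${f%.*}"     # archive.tar (shortest suffix match)
echo "${f%%.*}"    # archive     (longest suffix match)
echo "${f##*.}"    # gz          (the extension itself)
```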
==Path name resolving function==
<source lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</source>
=Arrays=
==Reverse the order of elements==
An example: services in normal order and in reverse order for stop/start.
<source lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</source>
This results in:
<source lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</source>
=Loops=
==Numbers==
<source lang=bash>
$ for i in {0..9} ; do echo $i ; done
</source>
or
<source lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</source>
Other step sizes work the same way, e.g. always advancing by 3:
<source lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</source>
or even with a second loop variable:
<source lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</source>
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use <i>:</i> (the no-op builtin, standing in for <i>continue</i>) as the loop body.
<source lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</source>
For example:
<source lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</source>
=Functions=
==Log with timestamp==
<source lang=bash>
function printlog () {
# Function:
# Log things to logfile
#
# Parameter:
# 1: logfile
# *: You can call printlog like printf (except the first parameter is the logfile)
#
# OR
#
# Just pipe things to printlog
#
local logfile=${1}
shift
if [ -n "${*}" ]
then
format=${1}
shift
printf "%s ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${@}" >> ${logfile}
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}" >> ${logfile}
done
fi
}
</source>
<source lang=bash>
$ printf "test\n\ntoast\n" | printlog /dev/stdout
20190603 12:48:13 test
20190603 12:48:13
20190603 12:48:13 toast
$ printlog /dev/stdout "test\n"
20190603 12:48:19 test
$ printlog /dev/stdout "test %s %d %s\n" "bla" 0 "bli"
20190603 12:48:25 test bla 0 bli
$
</source>
=Calculations=
<source lang=bash>
$ echo $[ 3 + 4 ]
7
$ echo $[ 2 ** 8 ] # 2^8
256
</source>
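$[ ] is an older spelling that bash still accepts but deprecates; the standard arithmetic expansion $(( )) does the same (the ** operator being a bash extension) and also handles variables and assignments:

```shell
i=3
echo $(( i + 4 ))    # 7
echo $(( 2 ** 8 ))   # 256
(( i += 4 ))
echo ${i}            # 7
```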
=init scripts=
==A basic skeleton==
<source lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</source>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<source lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
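Note that $(date ...) inside the sed argument is expanded once, when the script starts, so every line carries the start timestamp. For a per-line timestamp the FIFO can be replaced by bash process substitution; a sketch:

```shell
#!/bin/bash
# Redirect stdout & stderr into a loop that prepends the current time to each line
exec > >(while IFS= read -r line; do
           printf '%s :: %s\n' "$(date '+%d.%m.%Y %H:%M:%S')" "${line}"
         done) 2>&1
echo bla
echo bli >&2
```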
== Add a timestamp to all output and send to file==
<source lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</source>
=Parameter parsing=
In progress... no time...
<source lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</source>
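A minimal, self-contained sketch of the --key=value convention the parser above implements (bash ≥ 4 for ${param^^}; <i>parse_params</i> is a made-up name): each --some-option=value becomes an exported SOME_OPTION variable.

```shell
#!/bin/bash
parse_params() {
  local param value
  while [ $# -gt 0 ]; do
    case $1 in
      --?*=?*)
        param=${1%%=*}; value=${1#*=}
        param=${param#--}       # strip leading --
        param=${param//-/_}     # hyphens -> underscores
        export "${param^^}=${value}"
        ;;
    esac
    shift
  done
}

parse_params --log-level=debug --retries=3
echo "${LOG_LEVEL} ${RETRIES}"    # debug 3
```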
4375ed822507be498b209c55102a25d82e0f5ae6
SSH Tipps und Tricks
0
75
1991
1882
2019-12-10T15:42:00Z
Lollypop
2
/* SSH über ein oder mehrere Hops */
wikitext
text/x-wiki
[[Kategorie:SSH|Tipps]]
[[Kategorie:Putty|Tipps]]
=SSH, the way to the destination=
==SSH over one or more hops==
To establish an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often awkward to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define proxy jumps for the whole way from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyJump GW_2
</pre>
But we can only reach GW_2 via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyJump GW_1
</pre>
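Since OpenSSH 7.3 the hops can also be chained in a single ProxyJump entry, or ad hoc on the command line with <i>ssh -J GW_1,GW_2 Host_B</i>:
<pre>
Host Host_B
    ProxyJump GW_1,GW_2
</pre>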
Now, on Host_A, you simply type <i>ssh Host_B</i> and you are tunneled through the two gateways GW_1 and GW_2.
Port forwardings, e.g. for NFS, then simply look like this:
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background, and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a Bash builtin, where %h and %p are filled in by SSH with the host (%h) and port (%p); it is only needed for the older ProxyCommand variant of this setup.
==Breaking out of paradise==
Problem: the environment you are working in is unfortunately so walled off with firewalls that you cannot get anything done. But you need to get out via SSH to quickly look at or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then put this into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> lands you on the SSH destination you want to reach. Of course you can in turn use this host in another ProxyCommand, and so on.
==Oh yes... the internal wiki...==
Not a problem either. If it is only reachable from the internal network, we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or, again, via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
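For the record: OpenSSH 8.7 and later do provide config equivalents for both flags; a sketch for the wiki host above:
<pre>
Host wiki
    # like -N
    SessionType none
    # like -f
    ForkAfterAuthentication yes
</pre>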
=The fingerprint=
For verification it is often easier to work with shorter strings. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==Starting pageant together with putty==
The following must appear under [Launch] in the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</source>
==/etc/fstab==
<source lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available on several smartphone OSes, I chose it for OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<source lang=bash>
$ sudo apt-get install libpam-google-authenticator
</source>
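Each user then generates an OTP secret once on the destination host; this writes ~/.google_authenticator. The flags below are one reasonable choice (time-based codes, no code reuse, rate limiting), not the only one:
<source lang=bash>
$ google-authenticator --time-based --disallow-reuse --rate-limit=3 --rate-time=30 --window-size=3
</source>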
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<source lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</source>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns successful (code was correct) all authentication is done.
* new_authtok_reqd=done : if a new authentication token is required, this is also treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator failed no other authentications will be tried
* nullok : Allow users who have not (yet) set up an OTP secret to pass this module
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<source lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</source>
Without the setting in /etc/pam.d/sshd, "PasswordAuthentication no" is not sufficient: you will still be asked for a password, because /etc/pam.d/sshd enables password authentication.
11eee0f38f6dd061fd68768e43b5c3726436b03a
Nextcloud
0
368
2029
2028
2020-09-10T12:19:21Z
Lollypop
2
/* BASH alias */
wikitext
text/x-wiki
[[Kategorie:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# m h dom mon dow command
*/15 * * * * php -f /var/www/nextcloud/cron.php
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
</source>
Answer the questions...
If you use your own theme, proceed with these steps:
<source lang=bash>
# sudo -u www-data php /var/www/nextcloud/occ config:system:set theme --value <your theme>
# sudo -u www-data php /var/www/nextcloud/occ maintenance:theme:update
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
dfb1cdf68603e050ee185e2bd82e8748bed17179
2030
2029
2020-09-10T12:19:53Z
Lollypop
2
/* Send calendar events */
wikitext
text/x-wiki
[[Kategorie:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the dav app's sendEventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
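To verify the setting, the value can be read back; config:app:get is the read counterpart of config:app:set (this assumes the occ alias defined above):
<source lang=bash>
# occ config:app:get dav sendEventRemindersMode
occ
</source>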
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
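Optionally, tell Nextcloud that background jobs are handled by cron in general (this sets only the background job mode and is independent of the reminder cronjob above):
<source lang=bash>
# occ background:cron
</source>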
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
</source>
Answer the questions...
If you use your own theme, proceed with these steps:
<source lang=bash>
# sudo -u www-data php /var/www/nextcloud/occ config:system:set theme --value <your theme>
# sudo -u www-data php /var/www/nextcloud/occ maintenance:theme:update
</source>
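A sketch of the finishing steps in case the updater left the instance in maintenance mode or you skipped its built-in occ upgrade step (assumes the occ alias from above):
<source lang=bash>
# occ upgrade                  (run pending migration steps)
# occ maintenance:mode --off   (re-enable the web interface)
# occ status                   (confirm the new version)
</source>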
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
24281210531b701b964a7277c08331ba774c25f2
ZFS on Linux
0
222
1994
1954
2020-01-07T12:30:08Z
Lollypop
2
/* Connect it to your network */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as a raw VMDK device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM=="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop live image (Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example, for sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Swap on ZFS with random key encryption==
<source lang=ini>
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=crypt.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/crypt.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/crypt.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</source>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which the options are read when the module is loaded.
<source lang=bash>
# update-initramfs -k all -u
</source>
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
===Limit the cache without reboot (not permanent)===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</source>
The value takes effect after the next reboot.
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
e33517d143f0b7c9adf94c3aaa53ca0a44c2be95
1995
1994
2020-01-07T13:56:15Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
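To apply the new rule without rebooting, reload udev and re-trigger the block devices (standard udevadm usage; not part of the original page):
<source lang=bash>
# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block
</source>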
==Virtualbox on ZVols==
If you use ZVols as a raw VMDK device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM=="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop live image (Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example, for sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
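The ashift column computed by the awk snippet is simply log2 of the physical sector size; the same calculation in plain shell arithmetic, for a 4096-byte-sector disk:
<source lang=bash>
# log2(4096) = 12, so ashift=12 for 4K-sector disks
sector=4096
ashift=0
while [ "$sector" -gt 1 ]
do
    sector=$((sector / 2))
    ashift=$((ashift + 1))
done
echo "$ashift"   # prints 12
</source>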
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Swap on ZFS with random key encryption==
<source lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</source>
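To activate the template unit, e.g. for the zvol rpool/swap (the instance name substitutes the %i placeholders; the name swap here is an assumption matching rpool/swap above):
<source lang=bash>
# systemctl daemon-reload
# systemctl enable --now zfs-cryptswap@swap.service
# swapon --show
</source>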
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which the options are read when the module is loaded.
<source lang=bash>
# update-initramfs -k all -u
</source>
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
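For example, reading the live ARC limit back; the sysfs file only exists while the zfs module is loaded, and the GiB conversion is plain shell arithmetic:
<source lang=bash>
p=/sys/module/zfs/parameters/zfs_arc_max
if [ -r "$p" ]; then cat "$p"; fi            # live value in bytes
echo $((10737418240 / 1024 / 1024 / 1024))   # 10737418240 bytes = 10 GiB
</source>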
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
===Limit the cache without reboot (not permanent)===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<source lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</source>
The value takes effect after the next reboot.
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
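A possible invocation (the filename zfs-backup-settings.sh is hypothetical; redirect stdout to keep one restorable record per host):
<source lang=bash>
# ./zfs-backup-settings.sh > zfs-settings-$(hostname).sh
</source>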
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
65ee14b54064ff6a75a4560742fe9635981358d1
1996
1995
2020-01-07T14:03:13Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as a raw VMDK device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM=="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop live image (Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example, for sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
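The ashift value is simply log2 of the physical sector size, so you can also compute it directly instead of reading it off the table. A small sketch; the `ashift_of` helper is ours, and `blockdev --getpbsz` (util-linux) reports the physical sector size of a real device:

```shell
# log2 of a power-of-two sector size = the matching ashift
ashift_of() {
  local size=$1 exp=0
  while [ "$size" -gt 1 ]; do size=$((size / 2)); exp=$((exp + 1)); done
  echo "$exp"
}

ashift_of 512    # -> 9
ashift_of 4096   # -> 12

# For a real disk (root required):
# ashift_of "$(blockdev --getpbsz /dev/sda)"
```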
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Swap on ZFS with random key encryption==
<source lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</source>
'''BE CAREFUL with the name after the @!'''
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<source lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
</source>
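To see why the instance name matters: systemd substitutes the part between the @ and .service for %i, and the unit above both destroys and recreates rpool/%i. A sketch of the expansion, using plain shell parameter stripping to mimic what systemd does:

```shell
# How the unit name maps to the dataset that gets destroyed/recreated
unit='zfs-cryptswap@cryptswap.service'
instance=${unit#*@}           # strip everything up to and including the @
instance=${instance%.service} # strip the .service suffix
echo "rpool/${instance}"      # -> rpool/cryptswap
```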
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs before rebooting; the initramfs is where these options are read from when the zfs module is loaded at boot.
<source lang=bash>
# update-initramfs -k all -u
</source>
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check the files in:
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
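The files under /sys/module/zfs/parameters hold one value each, so a small loop prints them as key=value pairs. A sketch with a generic helper (`dump_params` is ours); point it at /sys/module/zfs/parameters on a ZFS host:

```shell
# Print every parameter file in a directory as key=value
dump_params() {
  local dir=$1 f
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    printf '%s=%s\n' "$(basename "$f")" "$(cat "$f")"
  done
}

# On a ZFS host:
# dump_params /sys/module/zfs/parameters | grep arc
```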
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
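The arcstats counters are raw bytes, so a short awk filter makes them readable as MiB. A sketch (`to_mib` is our helper; the field layout is name, type, value, as in the output above):

```shell
# Convert the byte counters from arcstats to MiB
to_mib() {
  awk '$3 ~ /^[0-9]+$/ { printf "%s %.1f MiB\n", $1, $3 / 1048576 }'
}

printf 'c_max 4 1073741824\n' | to_mib   # -> c_max 1024.0 MiB

# On a ZFS host:
# grep c_ /proc/spl/kstat/zfs/arcstats | to_mib
```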
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
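Note that `$[...]` is an obsolete bash arithmetic form; `$((...))` is the portable spelling and yields the same value:

```shell
# 512 MB expressed in bytes, as written into zfs.conf
echo $((512 * 1024 * 1024))   # -> 536870912

# Equivalent line using the modern arithmetic syntax:
echo "options zfs zfs_arc_max=$((512 * 1024 * 1024))"
```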
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
cd2c75e5353010701937bbccf60b220445a51055
1997
1996
2020-01-07T14:06:18Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==VirtualBox on ZVols==
If you use ZVols as raw VMDK devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example, to check sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print $0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Swap on ZFS with random key encryption==
<source lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</source>
'''BE CAREFUL with the name after the @!'''
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<source lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</source>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs before rebooting; the initramfs is where these options are read from when the zfs module is loaded at boot.
<source lang=bash>
# update-initramfs -k all -u
</source>
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check the files in:
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
221940976b2a51f6fb6d71fb4ee30fb0491fab57
2000
1997
2020-03-11T10:06:10Z
Lollypop
2
/* ARC Cache */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==VirtualBox on ZVols==
If you use ZVols as raw VMDK devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example, to check sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print $0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Swap on ZFS with random key encryption==
<source lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</source>
'''BE CAREFUL with the name after the @!'''
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!
To destroy and recreate an encrypted ZFS volume named cryptswap, use:
<source lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</source>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs before rebooting; the initramfs is the filesystem that is read when the module is loaded, so it must contain these option settings.
<source lang=bash>
# update-initramfs -k all -u
</source>
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
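The values in arcstats are plain byte counters. As a quick sanity check, the c_max of 1073741824 shown above is 1 GiB; a one-liner sketch (any byte value can be substituted):
<source lang=bash>
# Convert an arcstats byte value (here the c_max from above) to MiB
awk 'BEGIN { printf "%.0f MiB\n", 1073741824 / 1024 / 1024 }'
# 1024 MiB
</source>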
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
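The `$[...]` arithmetic form used above is an old bash extension; the same option line can be built with the portable `$((...))` syntax, as in this sketch:
<source lang=bash>
# Build the modprobe option line for a 512 MiB ARC limit
mb=512
bytes=$((mb * 1024 * 1024))
echo "options zfs zfs_arc_max=${bytes}"
# options zfs zfs_arc_max=536870912
</source>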
===Check cache hits/misses===
<source lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</source>
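The hit rate printed by the script above is simply hits * 100 / (hits + misses); applied to sample counters:
<source lang=bash>
# Hit-rate formula from the script above, on sample values (90 hits, 10 misses)
awk 'BEGIN { hits = 90; misses = 10; printf "%.2f%%\n", (hits * 100) / (hits + misses) }'
# 90.00%
</source>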
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
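Each locally set property is emitted by print_local_options as a backslash-continued `-o property=value` line; the formatting boils down to this printf call (property and value here are sample inputs):
<source lang=bash>
# One option line exactly as the script prints it: tab, option, line continuation
printf '\t-o %s=%s \\\n' compression lz4
</source>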
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
cb3c9f1b58df57c599167eee79ea9d0b2807580b
2001
2000
2020-03-11T10:24:13Z
Lollypop
2
/* Backup ZFS settings */
wikitext
text/x-wiki
[[Kategorie:Linux|ZFS]]
[[Kategorie:ZFS|Linux]]
[[Kategorie:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<source lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</source>
==Virtualbox on ZVols==
If you use ZVols as raw vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<source lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</source>
<source lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</source>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the live CD) and choose the "Try Ubuntu" option.
===Get the right ashift value===
For example, to get the values for sda and sdb:
<source lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</source>
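The exponent function in the awk one-liner above just computes log2 of the physical sector size; as a standalone sketch with an assumed 4096-byte sector:
<source lang=bash>
# ashift = log2(physical sector size): 512 -> 9, 4096 -> 12
awk 'BEGIN { v = 4096; i = 0; while (v > 1) { v /= 2; i++ }; print i }'
# 12
</source>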
===Connect it to your network===
<source lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove "quiet" and "splash" from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</source>
==Swap on ZFS with random key encryption==
<source lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</source>
'''BE CAREFUL with the name after the @!'''
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!
To destroy and recreate an encrypted ZFS volume named cryptswap, use:
<source lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</source>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<source lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
Remember to update your initramfs before rebooting; the initramfs is the filesystem that is read when the module is loaded, so it must contain these option settings.
<source lang=bash>
# update-initramfs -k all -u
</source>
=== Check settings ===
<source lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</source>
<source lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</source>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<source lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</source>
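The values in arcstats are plain byte counters. As a quick sanity check, the c_max of 1073741824 shown above is 1 GiB; a one-liner sketch (any byte value can be substituted):
<source lang=bash>
# Convert an arcstats byte value (here the c_max from above) to MiB
awk 'BEGIN { printf "%.0f MiB\n", 1073741824 / 1024 / 1024 }'
# 1024 MiB
</source>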
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</source>
Now you have to drop the caches:
<source lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</source>
===Make the cache limit permanent===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<source lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</source>
After a reboot this value takes effect.
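The `$[...]` arithmetic form used above is an old bash extension; the same option line can be built with the portable `$((...))` syntax, as in this sketch:
<source lang=bash>
# Build the modprobe option line for a 512 MiB ARC limit
mb=512
bytes=$((mb * 1024 * 1024))
echo "options zfs zfs_arc_max=${bytes}"
# options zfs zfs_arc_max=536870912
</source>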
===Check cache hits/misses===
<source lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</source>
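The hit rate printed by the script above is simply hits * 100 / (hits + misses); applied to sample counters:
<source lang=bash>
# Hit-rate formula from the script above, on sample values (90 hits, 10 misses)
awk 'BEGIN { hits = 90; misses = 10; printf "%.2f%%\n", (hits * 100) / (hits + misses) }'
# 90.00%
</source>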
==Higher scrub performance==
<source lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</source>
==Backup ZFS settings==
A little script which may be used at your own risk.
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</source>
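Each locally set property is emitted by print_local_options as a backslash-continued `-o property=value` line; the formatting boils down to this printf call (property and value here are sample inputs):
<source lang=bash>
# One option line exactly as the script prints it: tab, option, line continuation
printf '\t-o %s=%s \\\n' compression lz4
</source>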
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
553288b1c3841e8aa48dd6e43bc860f341e7bb86
Systemd
0
233
2002
1871
2020-03-25T09:47:17Z
Lollypop
2
/* Take a look with systemctl */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, as daemon names usually are, this one is written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system: it can watch processes after startup has completed, list the sockets owned by processes it started, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As you can see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
470bf6a9c341fcf6ebc3195be78a259d88b1f637
2021
2002
2020-08-13T13:10:26Z
Lollypop
2
/* Examples */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like most daemon names it is written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty SysV init system of Linux.
It has many new features: it keeps watching processes after they have started, lists the sockets owned by the services it manages, adds security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you will see, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, someone deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
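The PID can then be combined with /proc to inspect the process. As a runnable sketch, the same /proc lookup is shown with $$ (the current shell) standing in for a service's MainPID:

```shell
# Sketch: inspect a process via /proc, as you would do with a service's
# MainPID. $$ (this shell's PID) stands in for the PID systemctl returns.
pid=$$
tr '\0' ' ' < "/proc/$pid/cmdline"
echo
```

With a real service you would use <i>pid=$(systemctl show --property=MainPID --value ssh.service)</i> instead.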
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
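To check the result at runtime: each capability is one bit in the ''CapBnd'' mask shown in /proc/PID/status. A small sketch (the mask value below is made up; CAP_NET_RAW is capability number 13 according to capabilities(7)):

```shell
# CapBnd in /proc/PID/status is a hex bitmask; capability number N is bit N.
# Example mask with CAP_NET_ADMIN (12) and CAP_NET_RAW (13) set:
capbnd=0x3000
cap_net_raw=13
if [ $(( (capbnd >> cap_net_raw) & 1 )) -eq 1 ]; then
    echo "CAP_NET_RAW is in the bounding set"
fi
# → CAP_NET_RAW is in the bounding set
```

On a live system you would read the real mask from /proc/PID/status of the service's main process.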
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
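A minimal drop-in sketch to turn this on for an existing service (the unit name ''example.service'' and the drop-in path are hypothetical):

```ini
# /etc/systemd/system/example.service.d/nonewprivileges.conf (hypothetical)
[Service]
NoNewPrivileges=true
```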
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
ntpd is a good but heavyweight old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists the servers to use if none of the ''NTP'' servers can be reached.
If you want to split the configuration into multiple files, or generate parts of it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
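For example, a drop-in that overrides only the server list (the file name and server names are made up):

```ini
# /etc/systemd/timesyncd.conf.d/10-local-servers.conf (hypothetical)
[Time]
NTP=ntp1.example.org ntp2.example.org
```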
After you have set up the configuration, you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should stop and disable (or uninstall) ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'', ''zed.service'' is wanted (started if it is enabled), while ''zfs-mount.service'' and ''zfs-share.service'' are required.
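The difference matters: if a ''Requires='' dependency fails to start, the depending unit fails with it, while a ''Wants='' dependency is only best-effort. A sketch with hypothetical units:

```ini
# hypothetical myapp.target
[Unit]
Description=Example target
# myapp.target fails if mydb.service cannot be started:
Requires=mydb.service
# mycache.service is pulled in if possible; its failure is tolerated:
Wants=mycache.service
# dependencies do not imply ordering, so order explicitly:
After=mydb.service mycache.service
```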
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleared. This is done via a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use the [Unit] setting ''JoinsNamespaceOf=<unit1> <unit2> …'' (space separated).
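A sketch of two hypothetical services sharing one private /tmp:

```ini
# a.service (hypothetical):
[Service]
PrivateTmp=true

# b.service (hypothetical) joins the namespace of a.service:
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
```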
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file must be derived from the mount destination path: slashes become dashes, and literal dashes must be escaped. You can easily generate the resulting name with systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
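The escaping rule itself is simple enough to sketch in shell (a rough approximation only; the real systemd-escape also hex-escapes other special characters, so use the tool in practice): first escape literal dashes, then turn slashes into dashes.

```shell
# Rough approximation of systemd's path escaping (use systemd-escape for real
# work). Drops the leading '/', turns '-' into '\x2d', then '/' into '-'.
escape_path() {
    printf '%s' "$1" | sed -e 's|^/||' -e 's|-|\\x2d|g' -e 's|/|-|g'
}
escape_path /var/chroot/run/systemd/journal/dev-log
# → var-chroot-run-systemd-journal-dev\x2dlog
```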
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember that the backslash in \x2d (the escaped dash -) must itself be escaped in the shell, hence the double backslash (\\).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this content in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Read the log from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
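What it creates and cleans is driven by tmpfiles.d(5) configuration. A sketch of a drop-in (path and values are hypothetical):

```
# /etc/tmpfiles.d/myapp.conf (hypothetical)
# Type Path           Mode UID  GID  Age
d      /run/myapp     0755 root root -
d      /var/tmp/myapp 1777 root root 10d
```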
= Examples =
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Exec lines are not run through a shell: no redirections, no trailing '&'.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
6ffa18518e052a5d45175fef1e7a79980190059e
2022
2021
2020-08-13T13:26:48Z
Lollypop
2
/* systemd-tmpfiles */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like most daemon names it is written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system: it can supervise processes after they have been started, list the sockets owned by processes it started, add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) of Solaris :-).
=Take a look with systemctl=
==List units==
As the listing shows, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even though it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
'''BUT''' beware of programs which just test on UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits UID changes: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
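A minimal, hypothetical sketch of how this could be applied to an existing service via a drop-in (the unit name <i>mydaemon.service</i> is just an example):
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/nonewprivs.conf
# Hypothetical drop-in; replace mydaemon.service with your unit.
[Service]
NoNewPrivileges=true
</source>
After creating the drop-in, run ''systemctl daemon-reload'' and restart the service.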
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good but fat old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP lists the servers to use if none of the NTP servers can be reached.
If you want to split the settings into multiple files or generate them at boot, you can put files with the ending <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
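For example, a drop-in overriding the server list could look like this (the file name is arbitrary):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local-ntp.conf (example name)
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
</source>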
After you have set up the config, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should stop and disable (or uninstall) ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'', we want ''zed.service'' to be started (if enabled), and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleared. This is implemented with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=''.
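A minimal sketch with two hypothetical units sharing one private /tmp (''JoinsNamespaceOf='' is a [Unit] directive, while ''PrivateTmp='' belongs into [Service]):
<source lang=ini>
# a.service (hypothetical unit)
[Service]
PrivateTmp=true
</source>
<source lang=ini>
# b.service (hypothetical unit) - shares the /tmp namespace of a.service
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</source>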
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to be derived from the mount destination path, with dashes in the path escaped. To get the resulting name you can simply use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
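The escaping can be sketched in plain shell. This is a simplification (the real systemd-escape also escapes other special characters): it only strips the leading slash, escapes existing dashes, and turns the remaining slashes into dashes:
<source lang=bash>
# Approximate 'systemd-escape -p --suffix=mount' for this path.
# Order matters: escape the existing dashes before converting slashes.
path=/var/chroot/run/systemd/journal/dev-log
unit=$(echo "$path" | sed -e 's|^/||' -e 's|-|\\x2d|g' -e 's|/|-|g').mount
printf '%s\n' "$unit"   # var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>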
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double-escape the backslash (\\) before x2d on the shell (x2d encodes a dash, -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor come up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of, for example, <i>apache2.service</i>, you can put a config file under <i>/etc/tmpfiles.d/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run it once an hour? Just reschedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not interpret shell redirections ('>>', '2>&1') or '&'
# in Exec lines, and stopping the database needs dbshut, not dbstart.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
4206c10817ddb2813e1c5cfd26a2d8ebbfb33922
2023
2022
2020-08-13T13:35:53Z
Lollypop
2
/* systemd-tmpfiles */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names in general, systemd is written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system: it can supervise processes after they have been started, list the sockets owned by processes it started, add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) of Solaris :-).
=Take a look with systemctl=
==List units==
As the listing shows, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even though it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look at it to understand what these capabilities are.
'''BUT''' beware of programs which just test on UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits UID changes: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
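A minimal, hypothetical sketch of how this could be applied to an existing service via a drop-in (the unit name <i>mydaemon.service</i> is just an example):
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/nonewprivs.conf
# Hypothetical drop-in; replace mydaemon.service with your unit.
[Service]
NoNewPrivileges=true
</source>
After creating the drop-in, run ''systemctl daemon-reload'' and restart the service.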
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good but fat old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP lists the servers to use if none of the NTP servers can be reached.
If you want to split the settings into multiple files or generate them at boot, you can put files with the ending <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
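For example, a drop-in overriding the server list could look like this (the file name is arbitrary):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local-ntp.conf (example name)
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
</source>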
After you have set up the config, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should stop and disable (or uninstall) ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a seperate namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount-unit file has to be the mount destination path. Dashes must be escaped. To get the resulting name you can easily use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double escape (\\) the x2d (which is a dash -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i> .
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>
To use this service for PrivateTMP directories for example of <i>apache2.service</i> you may use a config file under <i>/etc/tmpfiles.d/</i> like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i> :
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will cleanup all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours every time the <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the actual boot-id.
What ist that? An id which is generated at each boot.
You can get the boot-id with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the actual one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run ist once an hour? OK, just rescedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME >> 2>&1 &
ExecStop=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME 2>&1 &
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
623971168ed23a7ce988f4d881fcc3b670feb1e7
2024
2023
2020-08-13T13:40:37Z
Lollypop
2
/* systemd-tmpfiles */
wikitext
text/x-wiki
[[Kategorie:Linux]]
=systemd=
Yes, like daemon names in general, systemd has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system with the ability to watch processes after they have been started, to list sockets owned by processes started by systemd, and it adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I had deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
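The PID can then be fed to other tools. A small sketch (Linux only; the helper name is made up, and the ssh.service example in the comment assumes such a unit exists):
<source lang=bash>
# proc_comm <pid>: print the command name of a PID via /proc.
proc_comm() {
    cat "/proc/$1/comm"
}

# Combined with systemctl, e.g.:
#   proc_comm "$(systemctl show --property=MainPID --value ssh.service)"
</source>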
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it is started as root, all unnecessary capabilities are dropped when the process starts.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
==Nailing a process to it's rights : NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is pinned to the UID and the privileges it has. This prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than those of the user running the exploited service.
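A sketch of how both knobs could be combined in a drop-in for some service (unit name and path are made up):
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/harden.conf -- hypothetical drop-in
[Service]
# Only keep the capability the daemon really needs.
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
# Pin the process tree to its current privileges; setuid binaries won't elevate.
NoNewPrivileges=true
</source>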
=systemd-resolved the name resolve service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd an alternative to ntp=
The ntpd is a good and fat old horse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The ''NTP='' list is a space-separated list of NTP servers.
''FallbackNTP='' lists servers that are used if none of the servers in the ''NTP='' list can be reached.
If you want to split the configuration into multiple files, or generate them at boot, you can put files with the extension <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
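For example, a hypothetical drop-in (file name made up) could look like this:
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/my-servers.conf -- hypothetical drop-in;
# overrides the [Time] settings from the main file.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
</source>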
After you have set up the configuration you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'' we want ''zed.service'' to be started (if enabled), and we need ''zfs-mount.service'' and ''zfs-share.service''.
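As a sketch, a hypothetical target of your own could use the same mechanism (all unit names are made up):
<source lang=ini>
# /etc/systemd/system/myapp.target -- hypothetical example
[Unit]
Description=My application stack
# Hard dependency: starting the target fails if this cannot be started.
Requires=myapp-db.service
# Soft dependency: started if available, but a failure is tolerated.
Wants=myapp-metrics.service
After=myapp-db.service
[Install]
WantedBy=multi-user.target
</source>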
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleared. This is done via a separate mount namespace for this unit. Note that ''PrivateTmp='' belongs in the ''[Service]'' section:
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
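A minimal sketch with two made-up units sharing one private /tmp:
<source lang=ini>
# a.service (hypothetical)
[Service]
PrivateTmp=true

# b.service (hypothetical) -- joins the namespace of a.service
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</source>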
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the mount destination path, with dashes escaped. To get the resulting name you can simply use systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
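What systemd-escape does here can be sketched in pure bash (a rough approximation; the real tool escapes more characters and can append the unit suffix):
<source lang=bash>
# Rough sketch of systemd-escape -p for this path: escape literal dashes
# as \x2d first, then turn the path separators into dashes.
escape_path() {
    local p=${1#/}        # drop the leading slash
    local dash='\x2d'
    p=${p//-/$dash}       # literal dashes become \x2d
    p=${p//\//-}          # slashes become dashes
    printf '%s\n' "$p"
}

escape_path /var/chroot/run/systemd/journal/dev-log
# var-chroot-run-systemd-journal-dev\x2dlog
</source>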
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\x2dlog.mount for the mount===
Remember to escape the backslash (\\) on the shell; the \x2d encodes the dash (-).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
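Such a check can also be wrapped in a small helper (a sketch; the function name is made up, and mount paths containing spaces appear escaped in /proc/mounts, which this does not handle):
<source lang=bash>
# is_mounted <path>: check whether a path is listed as a mount point
# (second field) in /proc/mounts.
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

is_mounted /var/chroot/run/systemd/journal/dev-log && echo "socket is mounted"
</source>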
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of, for example, <i>apache2.service</i>, you can use a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot ID.
What is that? An ID which is generated at each boot.
You can get the boot-id with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the current one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
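The same ID is also exposed by the kernel, just with dashes; stripping them should give the form journalctl shows (a quick sketch, Linux only):
<source lang=bash>
# The kernel's boot ID, with the dashes removed to match journalctl's format.
tr -d '-' < /proc/sys/kernel/random/boot_id
</source>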
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run it once an hour? Just reschedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Oracle ==
UNTESTED, just an example!
Save it as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: shell redirections and '&' do not work in Exec lines.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
=Apache=
[[Kategorie:Webserver]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$[ 5 * 365 ]
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected, encrypted key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you are considering using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
ec49300c7f12682088920aeb51ce12e2b32ae90f
2012
2011
2020-05-27T13:13:47Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names
# Every argument (including the CN) becomes a DNS SAN entry
for i in "$@"
do
subject_alt_names+=( "DNS:${i}" )
done
echo "${subject_alt_names[*]}"
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
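After running the script you can confirm that the subjectAltName entries actually made it into the certificate. A minimal sketch, assuming OpenSSL 1.1.1 or newer; it generates a throwaway self-signed certificate with `-addext` so the commands run standalone, so point `-in` at your real file under ssl.crt instead:

```shell
# Sketch: verify SAN entries in a certificate (throwaway cert stands in
# for the real one; hostnames are placeholders).
CRT="$(mktemp)"; KEY="$(mktemp)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=www.example.org" \
        -addext "subjectAltName=DNS:www.example.org,DNS:example.org" \
        -keyout "${KEY}" -out "${CRT}" 2>/dev/null
# Print only the subjectAltName extension
SAN_OUT="$(openssl x509 -noout -ext subjectAltName -in "${CRT}")"
echo "${SAN_OUT}"
```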
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected, encrypted key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
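To double-check which curve a key actually uses (note the prime256v1 remapping above), dump the key as text. A minimal sketch; a throwaway unencrypted key is generated here so it runs standalone, substitute server.de.ec-key for your own:

```shell
# Sketch: inspect an EC key's curve (throwaway key stands in for server.de.ec-key)
EC_KEY="$(mktemp)"
openssl ecparam -genkey -name prime256v1 -out "${EC_KEY}" 2>/dev/null
# Extract the curve's ASN1 OID line from the text dump
CURVE_LINE="$(openssl ec -in "${EC_KEY}" -noout -text 2>/dev/null | grep 'ASN1 OID')"
echo "${CURVE_LINE}"
```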
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
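Beyond dumping the whole certificate, two quick checks are often enough: the expiry date and the fingerprint. A minimal sketch; a throwaway self-signed certificate is generated here so it runs standalone, point `-in` at server.de-wildcard.pem instead:

```shell
# Sketch: expiry and fingerprint check (throwaway cert stands in for the real one)
CRT="$(mktemp)"; KEY="$(mktemp)"
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=demo" \
        -keyout "${KEY}" -out "${CRT}" 2>/dev/null
# Show the notAfter date and the SHA-256 fingerprint
openssl x509 -noout -enddate -fingerprint -sha256 -in "${CRT}"
# Exit status is 0 if the cert is still valid in 7 days, 1 otherwise
openssl x509 -noout -checkend $(( 7 * 86400 )) -in "${CRT}"
```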
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you are considering using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
73bd48231aa2bb41cdc0e797d36597eb0f5bdfc1
ESPEasy
0
371
2014
2020-05-28T13:09:37Z
Lollypop
2
Created page with "$ sudo apt install --yes esptool $ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip $ esptool --port /dev/ttyUSB0 --baud..."
wikitext
text/x-wiki
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 9600 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
1ddca4fb9a1b2fb8d7d41689e9225d6b076bbd8c
2015
2014
2020-05-28T13:09:58Z
Lollypop
2
wikitext
text/x-wiki
<source lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 9600 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
</source>
339d5b30d61b87d2a0344126030616ac1910c8eb
2016
2015
2020-05-28T13:14:57Z
Lollypop
2
wikitext
text/x-wiki
<source lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 9600 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
</source>
[https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
9d6e614fa26480488e391c80ef005f90f5050acf
2017
2016
2020-05-28T13:16:25Z
Lollypop
2
wikitext
text/x-wiki
<source lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 9600 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
</source>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
916c9ad130f1e5aec5e91216e7cf711170d70e6f
2018
2017
2020-05-28T13:22:04Z
Lollypop
2
wikitext
text/x-wiki
<source lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 9600 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</source>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
6dcab855dbfbae28dcfcd4f8a9f3f3f0c0c876be
2025
2018
2020-08-19T16:52:06Z
Lollypop
2
wikitext
text/x-wiki
<source lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</source>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
35466e8a827eb72bfb65e9cf896cd0514f9f3526
Docker tips and tricks
0
372
2019
2020-06-10T12:27:15Z
Lollypop
2
Created page with "== Using docker behind a proxy == <source lang=bash> # systemctl edit docker.service </source> Enter the next three lines and save: <source lang=ini> [Service] Environment="H..."
wikitext
text/x-wiki
== Using docker behind a proxy ==
<source lang=bash>
# systemctl edit docker.service
</source>
Enter the next three lines and save:
<source lang=ini>
[Service]
Environment="HTTP_PROXY=http://user:pass@proxy:port"
Environment="HTTPS_PROXY=http://user:pass@proxy:port"
</source>
Restart docker:
<source lang=bash>
# systemctl restart docker.service
</source>
7231c0e3928e6a6d5412e3329fdce939059a2296
2020
2019
2020-06-10T12:37:27Z
Lollypop
2
/* Using docker behind a proxy */
wikitext
text/x-wiki
== Using docker behind a proxy ==
<source lang=bash>
# systemctl edit docker.service
</source>
Enter the next three lines and save:
<source lang=ini>
[Service]
Environment="HTTP_PROXY=http://user:pass@proxy:port"
Environment="HTTPS_PROXY=http://user:pass@proxy:port"
</source>
Restart docker:
<source lang=bash>
# systemctl restart docker.service
</source>
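The drop-in that `systemctl edit` creates can also be generated non-interactively. A minimal sketch, with `user:pass@proxy.example.com:3128` as a placeholder proxy URL; the generated file belongs in /etc/systemd/system/docker.service.d/ followed by a daemon-reload and restart:

```shell
# Sketch: render the proxy drop-in for docker.service into a temp file.
# PROXY_URL is a placeholder; put your real proxy credentials/host here.
PROXY_URL="http://user:pass@proxy.example.com:3128/"
DROPIN="$(mktemp)"
cat > "${DROPIN}" <<EOF
[Service]
Environment="HTTP_PROXY=${PROXY_URL}"
Environment="HTTPS_PROXY=${PROXY_URL}"
EOF
cat "${DROPIN}"
# Install it (as root) with e.g.:
#   install -D -m 0644 "${DROPIN}" /etc/systemd/system/docker.service.d/http-proxy.conf
#   systemctl daemon-reload && systemctl restart docker.service
```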
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<source lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</source>
307f409e11da7cf19e00d8218a46acd3d16f74fd
Brocade
0
107
2026
1331
2020-08-21T14:52:31Z
Lollypop
2
/* SSH mit public key */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with a short explanation=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch model we are dealing with; here a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the one in charge)
* Subordinate
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: DISRUPTIVE ACTION! This is not hitless.'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more Fibre Channel switches that are connected to each other. Components such as hosts, storage, and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Nowadays practically only WWN zoning is used, because it is the most flexible and secure variant. It allows you to simply move cables around within the [[#Fabric|fabric]] without any device suddenly seeing a different device than before.
With port zoning there is a risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|}
=Enable root account for ssh=
==Enable root for ssh==
<source lang=bash>
sw-fc02fab-b:admin> rootaccess --show
RootAccess: consoleonly
sw-fc02fab-b:admin> rootaccess --set all
sw-fc02fab-b:admin> rootaccess --show
RootAccess: all
sw-fc02fab-b:admin> userconfig --change root -e yes
</source>
==Enable root account==
<source lang=bash>
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: No
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
sw-fc02fab-b:admin> userconfig --change root -e yes
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: Yes
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
</source>
==Set root password directly after enabling the account==
<source lang=bash>
$ ssh root@192.168.1.1
root@192.168.1.1's password:
============================================================================================
ATTENTION:
It is recommended that you change the default passwords for all the switch accounts.
Refer to the product release notes and administrators guide if you need further information.
============================================================================================
...
</source>
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Backup of the config=
Important: exchange the keys beforehand!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
backup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if grep -v "date =" ${backup_file} | diff -ub - ${tmp_file}
then
# The last backup is identical, drop the new one
rm -f ${backup_file}
else
# Differences encountered, keep the new backup
gzip -9 ${backup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
=Firmware update=
==Record the running firmware==
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] for setting up a chroot sftp environment.
Then create the home on the sftp-server:
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</source>
If there already is a brocade user with an authorized_keys file, do:
<source lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</source>
otherwise, put the keys into /home/sftp/.authorized_keys/brocade if you want.
Untar your firmware as user brocade into /home/sftp/brocade/fw.
Then log in to the switch as admin and run, for example:
<source lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</source>
04c0487206e67525d6ccb8b37fabd0c838e212e6
Category:General Disclaimer
14
373
2031
2020-09-14T06:58:23Z
Lollypop
2
Created blank page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Main Page
0
374
2032
2020-09-14T07:12:36Z
Lollypop
2
Created page with "{{Special:Allpages|namespace=14}}"
wikitext
text/x-wiki
{{Special:Allpages|namespace=14}}
da57b4748980cc968400c466ab6da185f05f0983
Main Page
0
374
2033
2032
2020-09-14T07:13:48Z
Lollypop
2
wikitext
text/x-wiki
{{Special:Categories}}
e338466c23f81c1555056882a3b463dc21ba4c6d
2034
2033
2020-09-14T07:17:21Z
Lollypop
2
wikitext
text/x-wiki
{{Special:AllPages}}
1a92294a6cf8bf61fefc4b7b09dbb2bddcc36eac
2035
2034
2020-09-14T07:22:34Z
Lollypop
2
wikitext
text/x-wiki
{{#categorytree:KnowHow}}
0080e34f6d29b80bd7f6310ba63c21927af98e11
2044
2035
2020-09-14T13:47:18Z
Lollypop
2
wikitext
text/x-wiki
First of all, please read my [[Project:General_disclaimer|disclaimer]]!
=[[:Kategorie:KnowHow|KnowHow]]=
<categorytree mode=pages depth=2>KnowHow</categorytree>
=[[:Kategorie:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Wiki quick start =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents User's Guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may share responsibility for the contents of the linked pages. This can only be prevented by explicitly distancing oneself from those contents. For all links on this homepage: I hereby explicitly distance myself from all contents of all pages linked from my homepage and do not adopt these contents as my own.
81c489fd383faa8f8b6a9baef42679dde63216e9
2045
2044
2020-09-14T13:47:56Z
Lollypop
2
wikitext
text/x-wiki
First of all, please read my [[Project:General_disclaimer|disclaimer]]!
=[[:category:KnowHow|KnowHow]]=
<categorytree mode=pages depth=2>KnowHow</categorytree>
=[[:category:Projekte|My Projects]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Comments are always welcome: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Wiki quick start =
Help on using and configuring the wiki software can be found in the [http://meta.wikimedia.org/wiki/Help:Contents User's Guide].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings List of configuration settings]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailing list announcing new MediaWiki releases]
With its ruling of 12 May 1998, the Landgericht Hamburg (Hamburg district court) decided that by placing a link one may share responsibility for the contents of the linked pages. This can only be prevented by explicitly distancing oneself from those contents. For all links on this homepage: I hereby explicitly distance myself from all contents of all pages linked from my homepage and do not adopt these contents as my own.
709657d3e5bdbbc6ad0e54118aad3d9e71b4e86d
Template:!-
10
56
2036
94
2020-09-14T10:14:03Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>|-</includeonly><noinclude>''This template is required by [[Vorlage:Ameisenart]] and [[Vorlage:Ameisengattung]].''</noinclude><noinclude>
[[Category:Vorlage]]
</noinclude>
286578ee2a2ed6072db03cf117b9d24e925252ea
2037
2036
2020-09-14T10:15:31Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>|-</includeonly><noinclude>''This template is required by [[Vorlage:Ameisenart]] and [[Vorlage:Ameisengattung]].''</noinclude><noinclude>
[[category:Vorlage]]
</noinclude>
79c01648f5552bd67bc41740f5d256b0731270a1
2038
2037
2020-09-14T10:16:08Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>|-</includeonly><noinclude>''This template is required by [[Template:Ameisenart]] and [[Template:Ameisengattung]].''</noinclude><noinclude>
[[category:Vorlage]]
</noinclude>
d710444f3c43dfd9079bb896c8de77c660c9a0c5
Template:Ameisenart
10
47
2039
207
2020-09-14T10:17:46Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{sizeGynomorphe|}}}
|{{!-}}
{{!}} Gyne:
{{!}} {{{sizeGynomorphe|}}} |{{!-}}}}
{{#if:{{{sizeErgatomorphe|}}}
|{{!-}}
{{!}} Arbeiterinnen:
{{!}} {{{sizeErgatomorphe|}}} |{{!-}}}}
{{#if:{{{sizeMajor|}}}
|{{!-}}
{{!}} Majorarbeiterinnen:
{{!}} {{{sizeMajor|}}} |{{!-}}}}
{{#if:{{{sizeSoldat|}}}
|{{!-}}
{{!}} Soldaten:
{{!}} {{{sizeSoldat|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}}
|{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{Winterruhe|}}}
| {{!-}}
{{!}} Winterruhe:
{{!}} {{{Winterruhe|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[category:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[category:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[category:Ameisenarten]]}}
</includeonly>
<noinclude>
4806afa826c7d5df58ce14b5d40884141385d719
2041
2039
2020-09-14T10:23:39Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{WissName}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
|style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}} | [[File:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]{{{Bildbeschreibung}}}</div> | |}}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
{{#if:{{{Untergattung|}}}|
{{!-}}
{{!}} Untergattung:
{{!}} ''[[{{{Untergattung|}}}]]''
}}
|-
| Art:
|''{{{WissName|}}}''
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{WissName}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}} |{{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}} |{{!-}}}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}|{{!-}}}}
{{#if:{{{Gruendung|}}}
| {{!-}}
{{!}} Gründung:
{{!}} {{{Gruendung|}}} |{{!-}}}}
{{#if:{{{Koeniginnen|}}}
|{{!-}}
{{!}} Königinnen:
{{!}} {{{Koeniginnen|}}} |{{!-}}}}
{{#if:{{{maxKolo|}}}
|{{!-}}
{{!}} max. Koloniegröße:
{{!}} {{{maxKolo|}}} |{{!-}}}}
{{#if:{{{sizeGynomorphe|}}}
|{{!-}}
{{!}} Gyne:
{{!}} {{{sizeGynomorphe|}}} |{{!-}}}}
{{#if:{{{sizeErgatomorphe|}}}
|{{!-}}
{{!}} Arbeiterinnen:
{{!}} {{{sizeErgatomorphe|}}} |{{!-}}}}
{{#if:{{{sizeMajor|}}}
|{{!-}}
{{!}} Majorarbeiterinnen:
{{!}} {{{sizeMajor|}}} |{{!-}}}}
{{#if:{{{sizeSoldat|}}}
|{{!-}}
{{!}} Soldaten:
{{!}} {{{sizeSoldat|}}} |{{!-}}}}
{{#if:{{{Nest|}}}
| {{!-}}
{{!}} Nest:
{{!}} {{{Nest|}}}
|{{!-}}}}
{{#if:{{{Ausbruchsschutz|}}}
| {{!-}}
{{!}} Ausbruchsschutz:
{{!}} {{{Ausbruchsschutz|}}} |{{!-}}}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}} |{{!-}}}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}} |{{!-}}}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}} |{{!-}}}}
{{#if:{{{Winterruhe|}}}
| {{!-}}
{{!}} Winterruhe:
{{!}} {{{Winterruhe|}}} |{{!-}}}}
|}
|}
{{#if:{{{Untergattung|}}}| [[category:{{{Untergattung|}}}]]}}
{{#if:{{{Gattung|}}}| [[category:{{{Gattung|}}}]]}}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[category:Ameisenarten]]}}
</includeonly>
<noinclude>
3bc372b8a0695ad7e91059b6148a763b0f907895
Category:Formica
14
20
2040
33
2020-09-14T10:19:41Z
Lollypop
2
wikitext
text/x-wiki
[[category:Ameisen]]
5d5b288708c73639e181d2289ef49af90495fea2
Admin hints
0
360
2042
1896
2020-09-14T10:27:16Z
Lollypop
2
wikitext
text/x-wiki
[[category:KnowHow]]
==Cheat sheets==
* [https://cheat.sh curl-friendly general cheat sheet]
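For example, fetch a cheat sheet for a command straight from the shell (requires network access; the topic path tar is arbitrary):
<source lang=bash>
$ curl -s cheat.sh/tar
</source>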
==DNS==
===Get your IP address===
<source lang=bash>
$ dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com
</source>
da6e593d9a32905f5673d0f2fc3101a557c389a7
Nextcloud
0
368
2046
2030
2020-09-15T12:10:39Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# sudo -u www-data php /var/www/nextcloud/occ config:system:set theme --value <your theme>
# sudo -u www-data php /var/www/nextcloud/occ maintenance:theme:update
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
36b3b0c03a5f341d34cb61fa4b5528e30329b736
2054
2046
2020-10-15T05:05:23Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
# occ db:add-missing-columns
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# sudo -u www-data php /var/www/nextcloud/occ config:system:set theme --value <your theme>
# sudo -u www-data php /var/www/nextcloud/occ maintenance:theme:update
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
0b96a1cddf7d1ba204d5e92cecb0f25de1c4ab33
2055
2054
2020-10-15T05:22:27Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# sudo -u www-data php /var/www/nextcloud/occ config:system:set theme --value <your theme>
# sudo -u www-data php /var/www/nextcloud/occ maintenance:theme:update
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
38c4f3d9a03c62e37defa0d2c620dc66fe963403
Category:OwnCloud
14
196
2047
632
2020-09-15T12:11:18Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
2fd3e4c6b8ce5d1411a77ad31f8a8a4231ccfedb
HP Smart Array Controller
0
365
2048
1951
2020-09-16T08:38:15Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=ssacli=
=== Install the tool ===
<source lang=bash>
# echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --codename)/current non-free" >> /etc/apt/sources.list.d/hp.list
# curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
# apt update && apt install --yes ssacli
</source>
=== Revive a formerly failed disk ===
<source lang=bash>
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): Failed
logicaldrive 6 (931.48 GB, RAID 0): OK
# ssacli ctrl slot=0 ld 5 modify reenable forced
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): OK
logicaldrive 6 (931.48 GB, RAID 0): OK
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:4] disk HP LOGICAL VOLUME 2.52 /dev/sde
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
</source>
=hpacucli=
==Re-enable a disk after replacement==
<source lang=bash>
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, Failed)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
[root@app02 ~]# hpacucli controller slot=0 logicaldrive 2 modify reenable forced
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
</source>
53600056cabafd9bcaaee8be2cb625e44d66b929
2049
2048
2020-09-16T08:38:38Z
Lollypop
2
wikitext
text/x-wiki
[[category:Hardware]]
=ssacli=
=== Install the tool ===
<source lang=bash>
# echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --codename)/current non-free" >> /etc/apt/sources.list.d/hp.list
# curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
# apt update && apt install --yes ssacli
</source>
=== Revive a formerly failed disk ===
<source lang=bash>
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): Failed
logicaldrive 6 (931.48 GB, RAID 0): OK
# ssacli ctrl slot=0 ld 5 modify reenable forced
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): OK
logicaldrive 6 (931.48 GB, RAID 0): OK
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:4] disk HP LOGICAL VOLUME 2.52 /dev/sde
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
</source>
=hpacucli=
==Re-enable a disk after replacement==
<source lang=bash>
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, Failed)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
[root@app02 ~]# hpacucli controller slot=0 logicaldrive 2 modify reenable forced
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
</source>
a7f558a3747ab8986d77f6a0676a3ba92903ae59
Linux Software RAID
0
286
2050
1299
2020-09-16T08:42:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but maybe we are lucky and the disks are just marked bad.
==== cat /proc/mdstat ====
<source lang=bash>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
...
md10 : inactive sdap1[11] sdao1[5] sdah1[15](S) sdag1[4] sdy1[3] sdz1[14] sdr1[8] sdb1[13] sdq1[16](S) sdi1[1] sda1[12]
5236577280 blocks super 1.2
...
</source>
The state is <i>inactive</i>; this is not what we want. Look at the details in the next step.
==== mdadm --detail ====
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</source>
===Replace a disk in a mirror===
Device /dev/cciss/c0d1 is a newly replaced disk.
<source lang=bash>
[root@app02 ~]# sfdisk -d /dev/cciss/c0d0 | sfdisk --no-reread --force /dev/cciss/c0d1
[root@app02 ~]# mdadm --manage /dev/md0 --fail /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --remove /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --add /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md1 --fail /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --remove /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --add /dev/cciss/c0d1p2
[root@app02 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 cciss/c0d1p2[2] cciss/c0d0p2[0]
36925312 blocks [2/1] [U_]
resync=DELAYED
md0 : active raid1 cciss/c0d1p1[2] cciss/c0d0p1[0]
256003712 blocks [2/1] [U_]
[>....................] recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
unused devices: <none>
</source>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
Then run:
<source lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</source>
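To follow the rebuild progress afterwards, something like this can be used (refreshes every 5 seconds; quit with Ctrl-C):
<source lang=bash>
# watch -n 5 cat /proc/mdstat
</source>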
===Check the status===
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</source>
This is good:
State : clean, degraded, recovering
Better to wait for completion before the next reboot:
Rebuild Status : 5% complete
The rebuild should continue across a reboot, but... you know the devil...
27fdc16b70e7b976111a0357099f3b25476e4ae5
2051
2050
2020-09-16T08:43:57Z
Lollypop
2
wikitext
text/x-wiki
[[category:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but maybe we are lucky and the disks are just marked bad.
==== cat /proc/mdstat ====
<source lang=bash>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
...
md10 : inactive sdap1[11] sdao1[5] sdah1[15](S) sdag1[4] sdy1[3] sdz1[14] sdr1[8] sdb1[13] sdq1[16](S) sdi1[1] sda1[12]
5236577280 blocks super 1.2
...
</source>
The state is <i>inactive</i>; this is not what we want. Look at the details in the next step.
==== mdadm --detail ====
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</source>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
Then run:
<source lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</source>
===Check the status===
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</source>
This is good:
State : clean, degraded, recovering
Better to wait for completion before the next reboot:
Rebuild Status : 5% complete
The rebuild should continue across a reboot, but... you know the devil...
==Replace a disk in a mirror==
Device /dev/cciss/c0d1 is a newly replaced disk.
<source lang=bash>
[root@app02 ~]# sfdisk -d /dev/cciss/c0d0 | sfdisk --no-reread --force /dev/cciss/c0d1
[root@app02 ~]# mdadm --manage /dev/md0 --fail /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --remove /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --add /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md1 --fail /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --remove /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --add /dev/cciss/c0d1p2
[root@app02 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 cciss/c0d1p2[2] cciss/c0d0p2[0]
36925312 blocks [2/1] [U_]
resync=DELAYED
md0 : active raid1 cciss/c0d1p1[2] cciss/c0d0p1[0]
256003712 blocks [2/1] [U_]
[>....................] recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
unused devices: <none>
</source>
e4e1e1b2ac659a7b0b054beb3515520b92a8eda2
2052
2051
2020-09-16T08:46:00Z
Lollypop
2
/* Replace a disk in a mirror */
wikitext
text/x-wiki
[[category:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but with a bit of luck the disks are merely marked as bad.
==== cat /proc/mdstat ====
<source lang=bash>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
...
md10 : inactive sdap1[11] sdao1[5] sdah1[15](S) sdag1[4] sdy1[3] sdz1[14] sdr1[8] sdb1[13] sdq1[16](S) sdi1[1] sda1[12]
5236577280 blocks super 1.2
...
</source>
The state is <i>inactive</i>; this is not what we want. Look at the details in the next step.
==== mdadm --detail ====
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</source>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
Then force the reassembly:
<source lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</source>
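Before forcing the assembly, it can also help to compare the event counters in the member superblocks; members whose counters are close together are the safest to force back in. A minimal sketch (the helper name and the device glob are just examples):

```shell
# events_of: print the "Events" counter from `mdadm --examine` output
events_of() {
    awk '/Events :/ {print $3}'
}

# Example: list the event counter of every member partition
# for d in /dev/sd?1; do
#     printf '%s ' "$d"; mdadm --examine "$d" | events_of
# done
```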
===Check the status===
<source lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</source>
This is good:
State : clean, degraded, recovering
Better wait for the rebuild to complete before the next reboot:
Rebuild Status : 5% complete
It should continue rebuilding after a reboot, but better not to tempt the devils...
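To wait for the rebuild without re-reading mdadm --detail by hand, the percentage can be extracted and polled; a minimal sketch (the helper name is made up):

```shell
# rebuild_pct: extract the percentage from the "Rebuild Status" line of
# `mdadm --detail` output; prints nothing once the line disappears.
rebuild_pct() {
    sed -n 's/.*Rebuild Status : *\([0-9][0-9]*\)% complete.*/\1/p'
}

# Example poll loop:
# while pct=$(mdadm --detail /dev/md10 | rebuild_pct); [ -n "$pct" ]; do
#     echo "rebuild at ${pct}%"; sleep 60
# done
```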
==Replace a disk in a mirror==
Device /dev/cciss/c0d1 is a freshly replaced disk in an [[HP_Smart_Array_Controller#reenable_disk_after_replacement | HP Array Controller]].
<source lang=bash>
[root@app02 ~]# sfdisk -d /dev/cciss/c0d0 | sfdisk --no-reread --force /dev/cciss/c0d1
[root@app02 ~]# mdadm --manage /dev/md0 --fail /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --remove /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --add /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md1 --fail /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --remove /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --add /dev/cciss/c0d1p2
[root@app02 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 cciss/c0d1p2[2] cciss/c0d0p2[0]
36925312 blocks [2/1] [U_]
resync=DELAYED
md0 : active raid1 cciss/c0d1p1[2] cciss/c0d0p1[0]
256003712 blocks [2/1] [U_]
[>....................] recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
unused devices: <none>
</source>
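The finish= estimate in /proc/mdstat can also be pulled out programmatically, e.g. to alert when a resync takes unusually long; a small sketch (the helper name is an assumption):

```shell
# finish_min: extract the estimated remaining minutes from a /proc/mdstat
# recovery line such as
#   [>...]  recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
finish_min() {
    sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p'
}

# Example: finish_min < /proc/mdstat
```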
47248e877ccf08db549024e7ea0eb14308d443b9
Tomcat
0
375
2053
2020-09-23T12:27:25Z
Lollypop
2
Created page with " == Terminating SSL at the webserver or load balancer == If you want the tomcat let know that he is behind another Instance that terminates the SSL and tomcat should put https..."
wikitext
text/x-wiki
== Terminating SSL at the webserver or load balancer ==
If Tomcat sits behind another instance that terminates SSL and should put https:// into the links it generates, just add <i>scheme="https"</i> and <i>proxyPort="443"</i> to the non-SSL Connector definition, like this:
<source>
<Connector port="8080" protocol="HTTP/1.1"
server="Apache"
connectionTimeout="20000"
scheme="https"
proxyPort="443"
/>
</source>
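A quick sanity check before restarting Tomcat, to make sure the Connector really carries both attributes (the function name and the server.xml path are just examples):

```shell
# check_connector: succeed only if the given server.xml contains both
# proxy attributes somewhere; a very rough grep-based check.
check_connector() {
    grep -q 'scheme="https"' "$1" && grep -q 'proxyPort="443"' "$1"
}

# Example: check_connector /opt/tomcat/conf/server.xml && echo "looks good"
```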
94ab58ab15c5d9216895e43db3cd0cf833e488f4
Brocade
0
107
2056
2026
2020-10-26T10:58:56Z
Lollypop
2
/* Switch Types and Product Names */
wikitext
text/x-wiki
[[Kategorie:FC]]
[[Kategorie:Brocade]]
=A few commands with brief explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch we are looking at; here a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
and
* Subordinate (the one taking orders)
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: DISRUPTIVE ACTION! This is not hitless.'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs may see each other.
Nowadays practically only WWN zoning is used, because it is the most flexible and safest option: cables can be moved around freely within the [[#Fabric|fabric]] without a device suddenly seeing a different device than before.
With port zoning there is the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|-
| 124 || Brocade 5430 8 Gb 16-port Blade Server SAN I/O Module
|-
| 125 || Brocade 5431 8 Gbit 16-port stackable switch module
|-
| 129 || Brocade 6548 16 Gb 28-port Blade Server SAN I/O Module
|-
| 130 || Brocade M6505 16 Gbit 24-port Blade Server SAN I/O Module
|-
| 133 || Brocade 6520 16 Gb 96-port switch
|-
| 134 || Brocade 5432 8 Gb 24-port Blade Server SAN I/O Module
|-
| 148 || Brocade 7840 16 Gb 24-FC ports, 16 10GbE ports, 2 40GbE ports extension switch
|-
| 170 || Brocade G610
|}
=Enable root account for ssh=
==Enable root for ssh==
<source lang=bash>
sw-fc02fab-b:admin> rootaccess --show
RootAccess: consoleonly
sw-fc02fab-b:admin> rootaccess --set all
sw-fc02fab-b:admin> rootaccess --show
RootAccess: all
sw-fc02fab-b:admin> userconfig --change root -e yes
</source>
==Enable root account==
<source lang=bash>
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: No
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
sw-fc02fab-b:admin> userconfig --change root -e yes
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: Yes
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
</source>
==Set root password directly after enabling the account==
<source lang=bash>
$ ssh root@192.168.1.1
root@192.168.1.1's password:
============================================================================================
ATTENTION:
It is recommended that you change the default passwords for all the switch accounts.
Refer to the product release notes and administrators guide if you need further information.
============================================================================================
...
</source>
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate a key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Backing up the config=
Important: exchange the keys beforehand!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered, keep the new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
=Firmware update=
==Record the running firmware==
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] for setting up a chroot sftp environment.
Then create the home on the sftp-server:
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</source>
If there is already a brocade user with an authorized_keys file, do:
<source lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</source>
otherwise put the keys into /home/sftp/.authorized_keys/brocade if you want.
Untar your firmware as user brocade in /home/sftp/brocade/fw.
Log in to the switch as admin and run, for example:
<source lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</source>
426a7fb8f12228238fe1e3d25a3898bbfd4cd157
2057
2056
2020-10-26T11:11:48Z
Lollypop
2
wikitext
text/x-wiki
[[Category:FC]]
[[Category:Brocade]]
=A few commands with brief explanations=
==Firmware==
<source lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</source>
== General Switch Information ==
<source lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</source>
Important lines:
===switchshow:switchType===
<source lang=bash>
switchType: 71.2
</source>
switchType tells us which switch we are looking at; here a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<source lang=bash>
zoning: ON (Fabric1)
</source>
Shows whether [[#Zoning|zoning]] is enabled and which configuration is active (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles:
* Principal (the boss)
and
* Subordinate (the one taking orders)
e.g.:
<source lang=bash>
switchRole: Principal
</source>
The role can be changed.
'''WARNING: DISRUPTIVE ACTION! This is not hitless.'''
<source lang=bash>
brocade1:admin> fabricprincipal -f 1
</source>
==Fabric==
A fabric consists of one or more interconnected Fibre Channel switches. Components such as hosts, storage, and tape drives are attached to the fabric through the Fibre Channel switches.
<source lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</source>
==InterSwitchLinks (ISL)==
islshow shows which other switches are attached and through which ports they are connected to the current one.
<source lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</source>
==Zoning==
A zone defines which ports or WWNs may see each other.
Nowadays practically only WWN zoning is used, because it is the most flexible and safest option: cables can be moved around freely within the [[#Fabric|fabric]] without a device suddenly seeing a different device than before.
With port zoning there is the risk of plugging a cable into the wrong port.
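The usual WWN-zoning workflow on FOS looks roughly like this (zone name and WWN members are made-up examples; take the active config name, here Fabric1, from switchshow first):

```text
brocade:admin> zonecreate "z_host1_stor1", "21:00:00:24:ff:36:45:02; 50:0a:09:81:96:c8:3e:f8"
brocade:admin> cfgadd "Fabric1", "z_host1_stor1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
```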
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|-
| 124 || Brocade 5430 8 Gb 16-port Blade Server SAN I/O Module
|-
| 125 || Brocade 5431 8 Gbit 16-port stackable switch module
|-
| 129 || Brocade 6548 16 Gb 28-port Blade Server SAN I/O Module
|-
| 130 || Brocade M6505 16 Gbit 24-port Blade Server SAN I/O Module
|-
| 133 || Brocade 6520 16 Gb 96-port switch
|-
| 134 || Brocade 5432 8 Gb 24-port Blade Server SAN I/O Module
|-
| 148 || Brocade 7840 16 Gb 24-FC ports, 16 10GbE ports, 2 40GbE ports extension switch
|-
| 170 || Brocade G610
|}
=Enable root account for ssh=
==Enable root for ssh==
<source lang=bash>
sw-fc02fab-b:admin> rootaccess --show
RootAccess: consoleonly
sw-fc02fab-b:admin> rootaccess --set all
sw-fc02fab-b:admin> rootaccess --show
RootAccess: all
sw-fc02fab-b:admin> userconfig --change root -e yes
</source>
==Enable root account==
<source lang=bash>
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: No
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
sw-fc02fab-b:admin> userconfig --change root -e yes
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: Yes
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
</source>
==Set root password directly after enabling the account==
<source lang=bash>
$ ssh root@192.168.1.1
root@192.168.1.1's password:
============================================================================================
ATTENTION:
It is recommended that you change the default passwords for all the switch accounts.
Refer to the product release notes and administrators guide if you need further information.
============================================================================================
...
</source>
=SSH with public key=
==Host -> Brocade==
<source lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</source>
==Brocade -> Host==
===Generate a key on the switch===
As '''admin'''!
<source lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</source>
===Key from switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<source lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</source>
=Backing up the config=
Important: exchange the keys beforehand!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<source lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered, keep the new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</source>
=Firmware update=
==Record the running firmware==
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] for setting up a chroot sftp environment.
Then create the home on the sftp-server:
<source lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</source>
If there is already a brocade user with an authorized_keys file, do:
<source lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</source>
otherwise put the keys into /home/sftp/.authorized_keys/brocade if you want.
Untar your firmware as user brocade in /home/sftp/brocade/fw.
Log in to the switch as admin and run, for example:
<source lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</source>
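After starting the download, progress and the resulting versions can be followed with firmwaredownloadstatus and firmwareshow (output omitted; exact wording varies by FOS version):

```text
san-sw:admin> firmwaredownloadstatus
san-sw:admin> firmwareshow
```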
f20f741ff8a6fac759159661f25273fb935f9dcd
MySQL Tipps und Tricks
0
197
2058
1921
2020-10-26T15:00:30Z
Lollypop
2
/* On Linux */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===Show MySQL traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===MySQL processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
Settings changed with SET GLOBAL last only until the server is restarted.
'''Don't forget to add them to your my.cnf to make them permanent!'''
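From MySQL 8.0 on there is a third option: SET PERSIST writes the setting to mysqld-auto.cnf in the datadir, so it survives restarts without touching my.cnf:
<source lang=mysql>
mysql> SET PERSIST slow_query_log = 'ON';
</source>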
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries (replaced by slow_query_log in MySQL 5.6)
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files defined by general_log_file and slow_query_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears anywhere in the log_output list it takes precedence and disables logging, so
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
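To check which destinations are currently active:
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'log_output';
</source>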
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
if you get
ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
For an idea of the binlog file to investigate on the master do this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance: only metadata is journaled)
* data=ordered (good performance: journals metadata and groups metadata writes with the related data changes; the ext3/ext4 default)
* data=journal (worst performance but best data protection: journals metadata and all data)
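As a sketch, a matching /etc/fstab entry for the volume from the mkfs example above (the mount point /var/lib/mysql is an assumption):
<source lang=text>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</source>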
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
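The same check without bc, using plain shell integer arithmetic (the division is exact here):
<source lang=bash>
# 26843545600 bytes / 1024^3 = GiB
bytes=26843545600
echo "$(( bytes / 1024 / 1024 / 1024 ))GiB"
</source>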
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<source lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' (while the device is in newraw mode, InnoDB re-initializes it on every start, wiping its contents):
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/modprobe.d/rpcsec_gss_krb5.conf ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems here, do this (files in /etc/modprobe.d/ must end in .conf to be read):
<source lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</source>
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the new limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mounts are ready when the mysql server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
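An alternative is to order the service after the concrete mount points with RequiresMountsFor= (a sketch; systemd translates this into Requires= and After= on the corresponding mount units):
<source lang=ini>
[Unit]
RequiresMountsFor=/MYSQLNFS_DATA /MYSQLNFS_LOG
</source>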
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
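The values are reported in bytes; the 80M from query_cache_size, for example:
<source lang=bash>
# 80M expressed in bytes, as shown by SHOW VARIABLES
echo $(( 80 * 1024 * 1024 ))
</source>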
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
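Keep in mind that plain dd mostly measures the client page cache. A variant that forces the data out before timing stops (a sketch, assuming GNU dd with conv=fsync; target and block count are parameters):
<source lang=bash>
#!/bin/bash
# Sequential-write test helper: fsync before dd exits, so the
# page cache cannot flatter the result. $1 = target file,
# $2 = number of 16k blocks (default 65536 = 1 GiB).
seqwrite_test() {
    local target="$1" count="${2:-65536}"
    dd if=/dev/zero of="${target}" bs=16k count="${count}" conv=fsync 2>&1
    rm -f "${target}"
}
# e.g.: seqwrite_test /MYSQLNFS_DATA/io.test
</source>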
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
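The 0.7*total_mem_size rule of thumb from the comment can be computed on the host (a sketch, reading MemTotal from /proc/meminfo on Linux):
<source lang=bash>
#!/bin/bash
# Suggest innodb_buffer_pool_size as ~70% of RAM, in MB
mem_kb=$(awk '$1=="MemTotal:"{print $2}' /proc/meminfo)
echo "innodb_buffer_pool_size=$(( mem_kb * 70 / 100 / 1024 ))M"
</source>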
==Analyze==
PROCEDURE ANALYSE() works up to MySQL 5.7 (it was removed in 8.0):
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
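From MySQL 5.7.6 on, SET PASSWORD ... = PASSWORD(...) is deprecated (and gone in 8.0); use ALTER USER in the init file instead:
<source lang=bash>
# echo "ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';" > /root/mysql-init
</source>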
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
    PDO::ATTR_PERSISTENT => true
));
$stmt = $pdo->prepare("SELECT * FROM mytable");
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        print_r($row);
    }
}
$stmt = null;
$pdo = null;
'
</source>
3396574165a9fc3f9be80edeb2f54614612512da
2059
2058
2020-10-26T15:01:21Z
Lollypop
2
/* /etc/modprobe.d/rpcsec_gss_krb5 */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===MySQL processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL it is just for the moment.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the table mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None, if NONE appears in the log_output destinations there is no logging
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
if you get
ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
For an idea of the binlog file to investigate on the master do this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mountoptions====
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
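The bc step can also be scripted; a minimal sketch that turns the byte count reported by fdisk into the G figure used in innodb_data_file_path below:

```shell
# 26843545600 bytes / 2^30 = 25 GiB, so ":25G..." is the right size token.
bytes=26843545600
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "${gib}G"
```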
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget AppArmor! Like I did... :-D
<source lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=text>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems, do this:
<source lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</source>
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open file limit for the service you have to tell systemd about the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
====== /etc/fstab ======
<source lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
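As a quick sanity check, the mount options in the fstab lines above can be picked apart with awk; field 4 of an fstab entry is the comma-separated option list:

```shell
# Extract rsize/wsize from an fstab line to confirm they match the
# 64 KiB transfer size used on the NFS server side.
line='cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime'
printf '%s\n' "$line" |
  awk '{ n = split($4, opt, ","); for (i = 1; i <= n; i++) if (opt[i] ~ /^(rsize|wsize)=/) print opt[i] }'
```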
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
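The byte values reported by SHOW VARIABLES correspond to the K/M suffixes in query_cache.cnf; a small sketch of the conversion:

```shell
# Convert my.cnf-style size suffixes to bytes (K = 1024, M = 1024*1024).
to_bytes() {
  case "$1" in
    *[Kk]) echo $(( ${1%?} * 1024 )) ;;
    *[Mm]) echo $(( ${1%?} * 1024 * 1024 )) ;;
    *)     echo "$1" ;;
  esac
}
to_bytes 256K   # query_cache_limit        -> 262144
to_bytes 2k     # query_cache_min_res_unit -> 2048
to_bytes 80M    # query_cache_size         -> 83886080
```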
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
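The MB/s figure dd prints can be reproduced by hand: bytes copied divided by elapsed seconds, in decimal megabytes:

```shell
# 1073741824 bytes in 1.7552 s, printed in decimal MB/s as dd does.
bytes=1073741824
seconds=1.7552
throughput=$(awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.0f MB/s", b / s / 1000000 }')
echo "$throughput"
```

This reproduces the 612 MB/s from the dd output above.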
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
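The buffer pool comment above (0.7 * total memory) works out like this; the 2048 MB total is an assumed machine size that reproduces the 1433M used in the sample:

```shell
# 70% of an assumed 2048 MB of RAM, truncated to whole megabytes.
total_mb=2048
pool_mb=$(awk -v t="$total_mb" 'BEGIN { printf "%d", t * 0.7 }')
echo "innodb_buffer_pool_size=${pool_mb}M"
```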
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
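The $(nawk ...) substitution above pulls the password out of /root/.my.cnf; nawk is the Solaris name, plain awk behaves the same on Linux. A dry run on a fabricated .my.cnf fed via stdin:

```shell
# Extract the value after 'password=' from a fake .my.cnf on stdin.
printf '[client]\nuser=root\npassword=s3cret\n' | awk -F'=' '/password/{print $2}'
```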
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</source>
b9bc3fdeff84a231daaf62bfeb1ccb5c75cb54bb
2060
2059
2020-10-26T15:01:53Z
Lollypop
2
/* /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf */
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<source lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</source>
===MySQL processes each second===
<source lang=bash>
# mysqladmin -i 1 --verbose processlist
</source>
===All grants===
<source lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</source>
Or a little nicer:
<source lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</source>
===Last update time===
* Per table
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</source>
* Per database
<source lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</source>
==InnoDB space==
===Per database===
<source lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</source>
===Per table===
<source lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</source>
==Logging==
If you use SET GLOBAL the change only lasts until the next server restart.
'''Don't forget to add it to your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<source lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</source>
* Log to the files set in slow_query_log_file and general_log_file
<source lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</source>
* Both: tables and files
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</source>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<source lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</source>
is equal to
<source lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</source>
===Enable/disable general logging===
<source lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
===Enable/disable logging of slow queries===
<source lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</source>
<source lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</source>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from the master:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<source lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</source>
To get an idea which binlog file to investigate on the master, run this on your slave:
<source lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</source>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<source lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</source>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; journals metadata and groups metadata with the related data changes)
* data=journal (worst performance but best data protection; journals metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<source lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</source>
Determine the size:
<source lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</source>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</source>
Start mysql:
<source lang=bash>
# service mysql start
</source>
Aaaaand... do not forget AppArmor! Like I did... :-D
<source lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</source>
<source lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</source>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<source lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</source>
Reload apparmor:
<source lang=bash>
# service apparmor reload
</source>
Another try!
<source lang=bash>
# service mysql start
</source>
<source lang=text>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</source>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<source lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</source>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<source lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</source>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems, do this:
<source lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</source>
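To verify the module is really gone, a non-root check reading /proc/modules (Linux-specific) can be used:

```shell
# Prints "not loaded" once rmmod and the blacklist entry have taken effect
grep -q '^rpcsec_gss_krb5 ' /proc/modules 2>/dev/null && echo loaded || echo "not loaded"
```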
====== /etc/sysctl.d/99-mysql.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</source>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<source lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
# NOTE: the nfs.* and iscsi.* entries below are Data ONTAP options copied
# from the NetApp paper, not Linux sysctls; the Linux sysctl tool will
# report them as unknown keys.
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</source>
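The files under /etc/sysctl.d/ are applied at boot; on a running system `sysctl --system` (as root) reloads them all, and `sysctl -p /etc/sysctl.d/99-netapp-nfs.conf` loads just this one file. A non-root spot check of a single value:

```shell
# Read one of the tuned values back from /proc (no root required)
cat /proc/sys/net/core/rmem_max
```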
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<source lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</source>
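pam_limits applies these values at login, so they are visible from any fresh session of the mysql user, e.g. `su - mysql -s /bin/sh -c 'ulimit -n'`. Note that systemd services do not go through PAM, which is why the systemd override in the next section is still needed. The per-session limits themselves can be read with ulimit:

```shell
# Current session's soft and hard open-file limits (what pam_limits
# granted this login); compare with the values configured above
ulimit -Sn
ulimit -Hn
```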
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service, you have to tell systemd the new limit.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Service]
LimitNOFILE=1024000
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate and check the limit:
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</source>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<source lang=bash>
# systemctl edit mysql.service
</source>
and enter:
<source lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</source>
<source lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</source>
Do not forget to activate the changes...
<source lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</source>
... and check they are active:
<source lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</source>
====== /etc/idmapd.conf ======
<source lang=text>
# Domain = localdomain
Domain = this.domain.tld
</source>
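After changing the domain, restart the id-mapping service and clear the kernel's idmap cache — the unit name varies by distro, `nfs-idmapd` is the usual systemd one: `systemctl restart nfs-idmapd && nfsidmap -c`. The domain must match the -v4-id-domain configured on the SVM above, otherwise files show up as nobody:nogroup. A small sketch of extracting the active value (run here against an inline copy of the snippet above):

```shell
# Extract the active Domain line (commented lines excluded)
awk -F' *= *' '$1=="Domain"{print $2}' <<'EOF'
# Domain = localdomain
Domain = this.domain.tld
EOF
# prints this.domain.tld
```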
====== /etc/fstab ======
<source lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</source>
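After `mount -a` (root), `nfsstat -m` shows the options that were actually negotiated, which may differ from the requested ones — for example, `nointr` is accepted but ignored on modern Linux kernels. As a small offline sketch, pulling the mount points out of the snippet:

```shell
# List the NFS mount points declared in the fstab snippet above
awk '$3 == "nfs" {print $2}' <<'EOF'
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
EOF
# prints /MYSQLNFS_LOG and /MYSQLNFS_DATA
```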
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<source lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</source>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<source lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</source>
<source lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</source>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<source lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</source>
<source lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</source>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<source lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</source>
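After editing the local profile, reload AppArmor the same way as at the top of this page (`service apparmor reload`, or `apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld` for just this profile) and watch the kernel log for denials (`dmesg | grep -i apparmor`). One classic profile typo is a missing trailing comma — every rule line must end with one:

```shell
# Count rule lines ending in a comma -- both sample rules should match
grep -c ',$' <<'EOF'
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
EOF
# prints 2
```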
====== Short stupid performance test ======
<source lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</source>
Some things seem to work...
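Two caveats on that number: without `conv=fsync` or `oflag=direct`, dd largely measures the client page cache rather than the NetApp, so 612 MB/s is optimistic; and the volume written is easy to sanity-check:

```shell
# bs=16k * count=65536 = total bytes dd reported above
echo $((16 * 1024 * 65536))
# prints 1073741824 (1 GiB)
```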
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
==Analyze==
<source lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</source>
<source lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</source>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<source lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</source>
===percona-toolkit===
<source lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</source>
===Sysbench===
<source lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</source>
==Recover a damaged root account==
===Lost grants===
Try out:
<source lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
Or:
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</source>
===Lost password===
<source lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</source>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<source lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</source>
/etc/mysql/conf.d/innodb.cnf:
<source lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</source>
/etc/mysql/conf.d/myisam.cnf:
<source lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</source>
/etc/mysql/conf.d/mysqld.cnf:
<source lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</source>
/etc/mysql/conf.d/mysqld_safe.cnf:
<source lang=mysql>
[mysqld_safe]
</source>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<source lang=mysql>
[mysqld_safe]
syslog
</source>
/etc/mysql/conf.d/query_cache.cnf:
<source lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</source>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<source lang=php>
$ php -r '
$pdo = new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", array(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
}
$stmt = null;
$pdo=null;
'
</source>
d61b70ae6c496165f869f1ffdd21e40f8f92cf8f
Fibrechannel Analyse
0
139
2061
1909
2020-10-26T15:36:38Z
Lollypop
2
/* fcinfo */
wikitext
text/x-wiki
[[Kategorie:Solaris]]
[[Kategorie:Brocade]]
[[Kategorie:NetApp]]
[[Kategorie:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <switch domain ID><switch port><ALPA>
So there are evidently two switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 20 (hex 0x14) : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we hang off the switch with ID 1 together with two storage systems, and have a connection to a switch with ID 3 to which two more storage systems are attached.
* Node WWN
We see four disk devices here, each with two entries (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths through this one host port.
Hence the four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
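The Port_ID decoding can be sketched in shell (standard FC addressing: 8 bits switch domain, 8 bits area/port, 8 bits ALPA; the sample IDs are taken from the dump_map output above):

```shell
# Split 24-bit FC Port_IDs into switch domain and port number
for pid in 30200 30600 10100; do
  v=$(printf '%d' "0x$pid")
  printf 'id=%s domain=%d port=%d\n' "$pid" $(( (v >> 16) & 255 )) $(( (v >> 8) & 255 ))
done
# id=30200 domain=3 port=2
# id=30600 domain=3 port=6
# id=10100 domain=1 port=1
```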
===luxadm -e rdls <HW_path> ===
<source lang=bash>
# luxadm -e port 2>/dev/null | awk '{print $1;}' | xargs -n 1 luxadm -e rdls 2>/dev/null
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
30200 2 1 0 0 0 0
30600 2 1 0 0 0 0
10200 1 1 0 0 0 0
11400 2 1 0 0 0 0
10b00 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0,1/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
0 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
</source>
===luxadm probe===
Lists all detected Fibre Channel devices:
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough idea of the firmware (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for matching up the LUNs when working with NetApp systems.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Then, for each path to this device, a block follows consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping a controller to its FC port:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
This shows the hardware path known from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, and so on:
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
===fcinfo lu -v <device>===
<source lang=bash>
# fcinfo lu -v /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
OS Device Name: /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
HBA Port WWN: 2100000e1ed89451
Controller: /dev/cfg/c4
Remote Port WWN: 2100f4e9d4564d21
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c97
LUN: 11
State: active/non-optimized
HBA Port WWN: 2100000e1ed89450
Controller: /dev/cfg/c3
Remote Port WWN: 2100f4e9d4564d44
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c1c
LUN: 11
State: active/non-optimized
Vendor: DataCore
Product: Virtual Disk
Device Type: Disk Device
Unformatted capacity: 204800.000 MBytes
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
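What the one-liner does, shown against a canned sample of the cfgadm output above (GNU awk here instead of Solaris nawk; the program is the same): it strips the ,<LUN> suffix and de-duplicates, leaving one unconfigure call per attachment point.

```shell
# Reduce unusable LUN entries to unique controller::WWN attachment points
awk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1); print $1}' <<'EOF' | sort -u
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
EOF
# c8::21000024ff2d49a2
# c9::203400a0b839c421
```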
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forcelip!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (access LUNs of a storage)==
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This is the "Principal" switch of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to another switch's E-Port (san-sw_21)
# 7 ports are equipped with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP (No_Module)
# 9 ports are in use
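This kind of bookkeeping can also be scripted. A sketch that tallies the State column from a saved switchshow output; function name and file layout are mine:

```shell
# summarize_ports FILE
# Port lines of switchshow output have numeric Index and Port in the
# first two fields; the state (Online, No_Light, No_Module, ...) is field 6.
summarize_ports() {
  awk '$1 ~ /^[0-9]+$/ && $2 ~ /^[0-9]+$/ { state[$6]++ }
       END { for (s in state) print s, state[s] }' "$1" | sort
}
```

Run as `summarize_ports switchshow.txt` after capturing the command output to a file.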
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
===islshow===
<source lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp 8080 running cDOT, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
Even the VServer (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
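To run this regularly, a cron entry on the backup host is enough. A sketch; schedule and log path are placeholders of mine:

```shell
# Crontab of the backup host: run the Brocade config backup nightly at 02:30.
30 2 * * * /opt/bin/backup_brocade_config >>/var/log/brocade_backup.log 2>&1
```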
==Script for parsing a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
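The vendor lookup in this script depends on where the OUI sits in the WWN: for WWNs starting with 5 the OUI starts at the second hex digit, otherwise at the fifth (the start=2/start=5 logic above). The same logic as a standalone shell function; the function name is mine:

```shell
# wwn_oui WWN
# Extracts the 6-hex-digit vendor OUI from a colon-separated WWN,
# mirroring the start=2 / start=5 logic of the awk parser.
wwn_oui() {
  local wwn="${1//:/}"                  # strip the colons
  case "$wwn" in
    5*) printf '%s\n' "${wwn:1:6}" ;;   # NAA 5: OUI in hex digits 2-7
    *)  printf '%s\n' "${wwn:4:6}" ;;   # otherwise: OUI in hex digits 5-10
  esac
}
```

For example, `wwn_oui 50:0a:09:82:80:d1:21:ee` yields 00a098 (NetApp) and `wwn_oui 21:00:00:24:ff:05:74:e4` yields 0024ff (Qlogic).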
=Commands: NetApp=
==fcp topology show: Where is my frontend SAN attached?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port>: Which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
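Decoding such a 24-bit FC address can be sketched as a small shell function (name is mine): the first byte is the switch domain, the second the area/port, the third the AL_PA (or NPIV index):

```shell
# fc_addr_split ADDRESS
# Splits a 6-hex-digit FC port address into its domain/area/AL_PA bytes.
fc_addr_split() {
  local a="$1"
  printf 'domain=%s area=%s alpa=%s\n' "${a:0:2}" "${a:2:2}" "${a:4:2}"
}
```

`fc_addr_split 010600` prints `domain=01 area=06 alpa=00`, matching the host address above.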
==fcp wwpn-alias (set|show): Alias names for more clarity when debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Prints only the WWNs, including several per line if a line contains more than one.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
==Some additions to NetApp's sanlun lun show on Solaris==
<source lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,online[disk],paths[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</source>
f17c23cf1ff0ea3e609fb0c2a2a1e21cae5ada07
RadSecProxy
0
345
2062
1907
2020-10-29T06:12:28Z
Lollypop
2
/* /etc/radsec/radsecproxy.conf */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
client anyIP4TLS {
host 0.0.0.0/0
type TLS
}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Wrongly configured clients are rejected here
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog).
The config, certificate and key are not readable through the user's primary group (nogroup), but through the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
c3ad8fb7abd359b7f6a8f6eb523a45f36ed617d3
2063
2062
2020-10-29T06:13:51Z
Lollypop
2
/* /etc/radsec/clients.conf */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain
realm domain.tld {
server Our-EduroamRadiusAuth
accountingServer Our-EduroamRadiusAcct
}
# Wrongly configured clients are rejected here
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Default route -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog).
The config, certificate and key are not readable through the user's primary group (nogroup), but through the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
dabc6f56ea292c7b2884b19a2e2f631f9d40d1a6
2064
2063
2020-10-29T06:25:58Z
Lollypop
2
/* /etc/radsec/realms.conf */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
As of radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous identity did not match above, reject here.
# Users that use their real identity are rejected too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm: realm with two consecutive dots in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog instead).
The config, certificate and key are not readable through the user's primary group (nogroup) but through the group radsecproxy that the process runs under (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
08ea40af6260214340e939c621ae2e68d24cffa6
2065
2064
2020-10-29T06:32:34Z
Lollypop
2
/* /etc/radsec/servers.conf */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
#
## UDP Radius
#
Server Our-EduroamRadiusAuth {
host <internal radius server>
port 1812
#rewriteOut UserName
type udp
secret ****secret****
}
Server Our-EduroamRadiusAcct {
host <internal radius accounting server>
port 1813
type udp
secret ****secret****
}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog instead).
The config, certificate and key are not readable through the user's primary group (nogroup) but through the group radsecproxy that the process runs under (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
4c7f6dee73448892f9df327125fa17665f484bf4
2066
2065
2020-10-29T06:35:20Z
Lollypop
2
/* /etc/radsec/servers.conf */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog instead).
The config, certificate and key are not readable through the user's primary group (nogroup) but through the group radsecproxy that the process runs under (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
9aeac4841b1d0fe2db85f77548a61ac41132badb
2067
2066
2020-10-29T06:36:14Z
Lollypop
2
/* /etc/radsec/realms.conf */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog instead).
The config, certificate and key are not readable through the user's primary group (nogroup) but through the group radsecproxy that the process runs under (see the systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
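To check what the two find commands produce without touching /etc, the same pattern can be replayed in a throwaway directory (sketch; the file names are made up and the root-only chown is left out):
<source lang=bash>
tmp=$(mktemp -d)
mkdir -p "$tmp/cert/ca"
touch "$tmp/radsecproxy.conf" "$tmp/cert/radsecproxy-key.pem"
# same dir/file split as the /etc/radsec commands above
find "$tmp" -type d -exec chmod 0750 {} \;
find "$tmp" -type f -exec chmod 0640 {} \;
stat -c '%a %n' "$tmp/radsecproxy.conf" "$tmp/cert"   # files 640, directories 750
</source>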
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
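Note: since systemd 231 the <code>*Directories=</code> sandbox options used above have been renamed (the old spellings still work as compatibility aliases). On a current system the same restrictions could also live in a drop-in; the path below is only an example:
<source lang=ini>
# /etc/systemd/system/radsecproxy.service.d/sandbox.conf (example path)
[Service]
InaccessiblePaths=/var
ReadOnlyPaths=/etc /lib /usr
</source>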
Put this in /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
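If no RadSec peer is reachable, the <code>openssl x509 -enddate</code> half of the pipeline can be tried against a throwaway self-signed certificate first (sketch; file names and CN are made up):
<source lang=bash>
# create a short-lived test certificate ...
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj '/CN=radsecproxy.test' \
  -keyout /tmp/test-key.pem -out /tmp/test-cert.pem 2>/dev/null
# ... and print its expiry date, like the live check above
openssl x509 -enddate -noout -in /tmp/test-cert.pem
</source>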
bfebc204289f95e47a095a8894e295c4005a5c40
2068
2067
2020-10-29T06:39:36Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed (fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017]).
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
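The <code>${PWD##*/}</code> in the configure call is plain shell parameter expansion (strip everything up to the last slash), so the install prefix picks up the version number from the checkout directory. A quick sketch with a made-up path:
<source lang=bash>
mkdir -p /tmp/radsecproxy-demo/tags/1.7.2
cd /tmp/radsecproxy-demo/tags/1.7.2
# ${PWD##*/} expands to the basename of the current directory
echo "/opt/radsecproxy-${PWD##*/}"   # -> /opt/radsecproxy-1.7.2
</source>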
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole cerstificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log or use syslog.
The config, certificate and key is not readable by the user (nogroup) but by the group radsecproxy where the porocess lives in (see systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -h /nonexistent
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this to /lib/systemd/system/radsecproxy.service and do:
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
===Testing===
<source lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP 139.11.1.85:2083 (LISTEN)
</source>
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -showcerts
</source>
===Certificate Enddate===
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
a2631a2630b300143eff9f66dff919bb980ad008
2069
2068
2020-10-29T06:42:18Z
Lollypop
2
/* Testing */
wikitext
text/x-wiki
[[Kategorie:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
This patch is no longer needed in radsecproxy 1.6.9 or in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], where it was fixed on [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===CA certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <subject hash of the certificate>.0====
<source lang=bash>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</source>
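The hash-naming step can be reproduced with any certificate. A minimal sketch using a throwaway self-signed CA under /tmp (the paths and the CN are made up for the demo):
<source lang=bash>
# Create a throwaway self-signed CA certificate (demo only)
mkdir -p /tmp/radsec-demo/ca
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=Demo Root CA" \
  -keyout /tmp/radsec-demo/ca.key -out /tmp/radsec-demo/ca.pem 2>/dev/null
# Install it under its OpenSSL subject-hash name, which is what
# radsecproxy's CACertificatePath lookup expects
HASH=$(openssl x509 -noout -hash -in /tmp/radsec-demo/ca.pem)
cp /tmp/radsec-demo/ca.pem "/tmp/radsec-demo/ca/${HASH}.0"
ls /tmp/radsec-demo/ca
</source>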
====/etc/radsec/cert/ca/1e09d511.0====
<source lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
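Note that the order of the realm blocks matters: radsecproxy uses the first block whose pattern matches. A rough plausibility check of the typo-catching patterns above, using grep -E as a stand-in for radsecproxy's regex engine (the sample identities are invented):
<source lang=bash>
# First match wins; anything unmatched falls through to the default realm *
classify() {
  id="$1"
  if printf '%s' "$id" | grep -Eq '@[^.]+$'; then
    echo "$id -> reject: no dot in realm"
  elif printf '%s' "$id" | grep -Eq '@.*\.\.'; then
    echo "$id -> reject: double dot in realm"
  elif printf '%s' "$id" | grep -Eq '@.*[[:space:]]'; then
    echo "$id -> reject: space in realm"
  else
    echo "$id -> forward to tlr1/tlr2"
  fi
}
classify "user@uni-x"             # typo: realm without a dot
classify "user@uni..example.de"   # typo: double dot
classify "guest@other.example.org"
</source>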
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file unless you log to syslog.
The config, certificate, and key are not readable via the user's primary group (nogroup), but via the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service below).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
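The effect of the chown/find pair can be tried safely on a scratch tree first (the /tmp paths are only for the demo, and GNU stat is assumed):
<source lang=bash>
# Recreate the permission scheme on a scratch copy and verify the modes
mkdir -p /tmp/radsec-perms/cert
touch /tmp/radsec-perms/radsecproxy.conf /tmp/radsec-perms/cert/key.pem
find /tmp/radsec-perms -type d -exec chmod 0750 {} \;
find /tmp/radsec-perms -type f -exec chmod 0640 {} \;
stat -c '%a %n' /tmp/radsec-perms /tmp/radsec-perms/cert/key.pem
</source>
Directories end up 0750 (group can enter) and files 0640 (group can read), so the process running with group radsecproxy can read the key without owning it.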
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
Check on the server whether radsecproxy is listening:
<source lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</source>
===Certificate Enddate===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</source>
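openssl can also evaluate the expiry date itself with x509 -checkend. A sketch against a throwaway self-signed certificate (with a live proxy you would feed it the s_client output from above instead):
<source lang=bash>
# Create a short-lived demo certificate (90 days), then warn if it
# expires within the next 30 days (2592000 seconds)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
if openssl x509 -checkend 2592000 -noout -in /tmp/demo.pem >/dev/null; then
  echo "certificate valid for more than 30 days"
else
  echo "certificate expires within 30 days"
fi
</source>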
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X: it analyzes the traffic just as Wireshark does. Great tool!
==MySQL traffic==
To watch for MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
<source lang=bash>
IFACE=ens192 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -Y "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.auth_plugin -e mysql.client_auth_plugin -e mysql.error_code -e mysql.error.message -e mysql.message -e mysql.user -e mysql.passwd -e mysql.command 'port 3306'
</source>
The little awk magic selects only packets from our own Ethernet address on interface ''IFACE''.
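The extraction can be tried standalone on canned ip-link output (the interface name and MAC address below are invented):
<source lang=bash>
# Pull the MAC address out of ip-link output, exactly like the
# $(ip link show ${IFACE} | awk ...) substitution in the tshark line above
ip_link_output='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff'
printf '%s\n' "$ip_link_output" | awk '$1 ~ /link\/ether/{print $2}'
</source>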
==Radius traffic==
Find a client with MAC address fc-18-3c-4a-c1-fa:
<source lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2 , see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</source>
With older tshark versions try:
<source lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</source>
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show the TLS versions in use that are lower than 1.2:
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</source>
or for https:
<source lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</source>
59a27a13412ced553e9f9f7e31af1b7e90c4801f
2073
2072
2020-11-12T13:35:51Z
Lollypop
2
/* MySQL traffic */
wikitext
text/x-wiki
[[Kategorie:MySQL]]
[[Kategorie:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal based wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes traffic just as Wireshark does. Great tool!
==MySQL traffic==
To watch for MySQL traffic on an application server you can use this line:
<source lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</source>
<source lang=bash>
# IFACE=ens192 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -Y "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.auth_plugin -e mysql.client_auth_plugin -e mysql.error_code -e mysql.error.message -e mysql.message -e mysql.user -e mysql.passwd -e mysql.command 'port 3306'
</source>
The little awk magic selects only packets originating from our own Ethernet address on interface ''IFACE''.
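The awk filter can be exercised on its own; here it is fed a fabricated line of `ip link show` output (the MAC address is made up for illustration):

```shell
# Only the link/ether line matches; field 2 is the interface's MAC address.
echo '    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff' \
  | awk '$1 ~ /link\/ether/{print $2}'
```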
==Radius traffic==
Find a client with MAC address fc-18-3c-4a-c1-fa:
<source lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2 , see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</source>
With older tshark versions try:
<source lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</source>
==Duplicate ACKs==
<source lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</source>
==Finding TCP problems==
<source lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</source>
==Decode SSL Connections==
For example, show the TLS versions in use that are lower than 1.2:
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<source lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</source>
or for https:
<source lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</source>
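The hex values that tshark prints for the handshake version field correspond to the table above; a hypothetical little helper to translate them:

```shell
# Map tshark's hex TLS version values to human-readable names
# (values taken from the table above).
tls_name() {
  case "$1" in
    0x00000301|0x0301) echo "TLS 1.0" ;;
    0x00000302|0x0302) echo "TLS 1.1" ;;
    0x00000303|0x0303) echo "TLS 1.2" ;;
    0x00000304|0x0304) echo "TLS 1.3" ;;
    *)                 echo "unknown ($1)" ;;
  esac
}
tls_name 0x00000301
```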
3604a897e22ed0538fb363c3fe18f0d2b6494f5c
Tmux tips and tricks
0
376
2074
2020-11-16T16:35:14Z
Lollypop
2
Created page with "== Enable mouse scrollwheel == <source> # echo "set -g mouse on" >> ~/.tmux.conf </source>"
wikitext
text/x-wiki
== Enable mouse scrollwheel ==
<source>
# echo "set -g mouse on" >> ~/.tmux.conf
</source>
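If you want to try the append without touching your real configuration, run it against a temporary file first; a running tmux then picks the option up after `tmux source-file ~/.tmux.conf`:

```shell
# Dry-run of the append using a temporary file instead of ~/.tmux.conf.
conf=$(mktemp)
echo "set -g mouse on" >> "$conf"
grep "^set -g mouse on$" "$conf"
rm -f "$conf"
```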
e3faba139a85ef8cd1493ce636609342e4ab1f7d
Ubuntu zsys
0
377
2075
2020-11-25T13:58:46Z
Lollypop
2
Created page with "==/etc/zsys.conf== <source> cat > /etc/zsys.conf <<EOF history: # Keep at least n history entry per unit of time if enough of them are present # The order condition the bu..."
wikitext
text/x-wiki
==/etc/zsys.conf==
<source>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entry per unit of time if enough of them are present
# The order determines the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC start after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 10
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsys-gc.service
</source>
b342b024413fd5129e7d660bb28186b8b360edfa
2076
2075
2020-11-25T13:59:57Z
Lollypop
2
/* /etc/zsys.conf */
wikitext
text/x-wiki
==Configure garbage collection==
<source>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entry per unit of time if enough of them are present
# The order determines the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC start after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 10
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsys-gc.service
</source>
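As a sanity check on the rules above: the gc buckets retain at most buckets x samplesperbucket states per rule, on top of the 10 recent states kept by keeplast:

```shell
# PreviousDay: 1*3, PreviousWeek: 5*1, PreviousMonth: 4*1 (values from the config above)
echo $(( 1*3 + 5*1 + 4*1 ))
```

So at most 12 gc-managed states are kept in addition to the keeplast states.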
85564a451b9832e5cabe9900bb2422035a8b0bf0
Solaris pkg
0
378
2077
2020-11-26T05:21:30Z
Lollypop
2
Created page with "[[Category: Solaris11]] == Troubleshooting == === Error: pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a Service Request about this issue inc..."
wikitext
text/x-wiki
[[Category: Solaris11]]
== Troubleshooting ==
=== Error: pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a Service Request about this issue including the information above and this message.===
Full output example:
<source>
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Traceback (most recent call last):
File "/usr/bin/pkg", line 5668, in handle_errors
__ret = func(*args, **kwargs)
File "/usr/bin/pkg", line 5654, in main_func
pargs=pargs, **opts)
File "/usr/bin/pkg", line 2267, in update
display_plan_cb=display_plan_cb, logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1556, in _update
logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1395, in __api_op
logger=logger, **kwargs)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1252, in __api_plan
display_plan_cb=display_plan_cb)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1224, in __api_plan
for pd in api_plan_func(**kwargs):
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1516, in __plan_op
log_op_end_all=True)
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1144, in __plan_common_exception
six.reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3.7/vendor-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1429, in __plan_op
self.__refresh_publishers()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 620, in __refresh_publishers
self.__cert_verify()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 603, in __cert_verify
self._img.check_cert_validity()
File "/usr/lib/python3.7/vendor-packages/pkg/client/image.py", line 1338, in check_cert_validity
uri=uri)
File "/usr/lib/python3.7/vendor-packages/pkg/misc.py", line 1242, in validate_ssl_cert
if cert.has_expired():
File "/usr/lib/python3.7/vendor-packages/OpenSSL/crypto.py", line 1360, in has_expired
not_after = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
File "/usr/lib/python3.7/_strptime.py", line 277, in <module>
_TimeRE_cache = TimeRE()
File "/usr/lib/python3.7/_strptime.py", line 191, in __init__
self.locale_time = LocaleTime()
File "/usr/lib/python3.7/_strptime.py", line 71, in __init__
self.__calc_month()
File "/usr/lib/python3.7/_strptime.py", line 99, in __calc_month
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/_strptime.py", line 99, in <listcomp>
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/calendar.py", line 63, in __getitem__
return funcs(self.format)
ValueError: character U+30000043 is not in range [U+0000; U+10ffff]
pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a
Service Request about this issue including the information above and this
message.
</source>
Workaround:
<source>
# unset $(env | awk -F'=' '$1 ~ /^LC_/{print $1;}')
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Creating Plan (Package planning: 766/1256): \
</source>
69090fc20e3af0b6582993c48146a6b16973625b
2078
2077
2020-11-26T05:22:08Z
Lollypop
2
/* Troubleshooting */
wikitext
text/x-wiki
[[Category: Solaris11]]
== Troubleshooting ==
=== Error: pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a Service Request about this issue including the information above and this message.===
Full output example:
<source lang=bash>
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Traceback (most recent call last):
File "/usr/bin/pkg", line 5668, in handle_errors
__ret = func(*args, **kwargs)
File "/usr/bin/pkg", line 5654, in main_func
pargs=pargs, **opts)
File "/usr/bin/pkg", line 2267, in update
display_plan_cb=display_plan_cb, logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1556, in _update
logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1395, in __api_op
logger=logger, **kwargs)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1252, in __api_plan
display_plan_cb=display_plan_cb)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1224, in __api_plan
for pd in api_plan_func(**kwargs):
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1516, in __plan_op
log_op_end_all=True)
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1144, in __plan_common_exception
six.reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3.7/vendor-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1429, in __plan_op
self.__refresh_publishers()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 620, in __refresh_publishers
self.__cert_verify()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 603, in __cert_verify
self._img.check_cert_validity()
File "/usr/lib/python3.7/vendor-packages/pkg/client/image.py", line 1338, in check_cert_validity
uri=uri)
File "/usr/lib/python3.7/vendor-packages/pkg/misc.py", line 1242, in validate_ssl_cert
if cert.has_expired():
File "/usr/lib/python3.7/vendor-packages/OpenSSL/crypto.py", line 1360, in has_expired
not_after = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
File "/usr/lib/python3.7/_strptime.py", line 277, in <module>
_TimeRE_cache = TimeRE()
File "/usr/lib/python3.7/_strptime.py", line 191, in __init__
self.locale_time = LocaleTime()
File "/usr/lib/python3.7/_strptime.py", line 71, in __init__
self.__calc_month()
File "/usr/lib/python3.7/_strptime.py", line 99, in __calc_month
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/_strptime.py", line 99, in <listcomp>
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/calendar.py", line 63, in __getitem__
return funcs(self.format)
ValueError: character U+30000043 is not in range [U+0000; U+10ffff]
pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a
Service Request about this issue including the information above and this
message.
</source>
Workaround:
<source lang=bash>
# unset $(env | awk -F'=' '$1 ~ /^LC_/{print $1;}')
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Creating Plan (Package planning: 766/1256): \
</source>
d9cac6f41f632e37b01d42435a6ea22fb98b97a5
2079
2078
2020-11-26T05:46:07Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Solaris11|pkg]]
== Troubleshooting ==
=== Error: pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a Service Request about this issue including the information above and this message.===
Full output example:
<source lang=bash>
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Traceback (most recent call last):
File "/usr/bin/pkg", line 5668, in handle_errors
__ret = func(*args, **kwargs)
File "/usr/bin/pkg", line 5654, in main_func
pargs=pargs, **opts)
File "/usr/bin/pkg", line 2267, in update
display_plan_cb=display_plan_cb, logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1556, in _update
logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1395, in __api_op
logger=logger, **kwargs)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1252, in __api_plan
display_plan_cb=display_plan_cb)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1224, in __api_plan
for pd in api_plan_func(**kwargs):
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1516, in __plan_op
log_op_end_all=True)
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1144, in __plan_common_exception
six.reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3.7/vendor-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1429, in __plan_op
self.__refresh_publishers()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 620, in __refresh_publishers
self.__cert_verify()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 603, in __cert_verify
self._img.check_cert_validity()
File "/usr/lib/python3.7/vendor-packages/pkg/client/image.py", line 1338, in check_cert_validity
uri=uri)
File "/usr/lib/python3.7/vendor-packages/pkg/misc.py", line 1242, in validate_ssl_cert
if cert.has_expired():
File "/usr/lib/python3.7/vendor-packages/OpenSSL/crypto.py", line 1360, in has_expired
not_after = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
File "/usr/lib/python3.7/_strptime.py", line 277, in <module>
_TimeRE_cache = TimeRE()
File "/usr/lib/python3.7/_strptime.py", line 191, in __init__
self.locale_time = LocaleTime()
File "/usr/lib/python3.7/_strptime.py", line 71, in __init__
self.__calc_month()
File "/usr/lib/python3.7/_strptime.py", line 99, in __calc_month
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/_strptime.py", line 99, in <listcomp>
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/calendar.py", line 63, in __getitem__
return funcs(self.format)
ValueError: character U+30000043 is not in range [U+0000; U+10ffff]
pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a
Service Request about this issue including the information above and this
message.
</source>
Workaround:
<source lang=bash>
# unset $(env | awk -F'=' '$1 ~ /^LC_/{print $1;}')
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Creating Plan (Package planning: 766/1256): \
</source>
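Why the workaround works: the awk filter extracts every LC_* variable name from the environment so `unset` can clear the broken locale settings. A demonstration on a fabricated environment dump:

```shell
# Same filter as above, fed fabricated env output; only LC_* names are printed.
printf 'LANG=de_DE.UTF-8\nLC_ALL=C.UTF-8\nLC_TIME=de_DE.UTF-8\nPATH=/usr/bin\n' \
  | awk -F'=' '$1 ~ /^LC_/{print $1;}'
```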
392660f4380bdc46a69b1d26dce50f33acde1089
Solaris 11 hwmgmt
0
352
2080
1845
2020-11-26T05:46:30Z
Lollypop
2
wikitext
text/x-wiki
[[Kategorie:Solaris11|hwmgmt]]
=Commands=
==hwmgmtcli==
==ilomconfig==
<source lang=bash>
# ilomconfig list network
</source>
==raidconfig==
<source lang=bash>
# raidconfig list all
</source>
==fwupdate==
<source lang=bash>
# fwupdate list all
</source>
==itpconfig==
<source lang=bash>
# itpconfig list interconnect
Interconnect
============
State: enabled
Type: USB Ethernet
SP Interconnect IP Address: 169.254.182.76
Host Interconnect IP Address: 169.254.182.77
Interconnect Netmask: 255.255.255.0
SP Interconnect MAC Address: 02:21:28:57:47:16
Host Interconnect MAC Address: 02:21:28:57:47:17
</source>
3c2185bb5217b79d0c95052bee4d3c3f0aa0cff4
2081
2080
2020-11-26T05:46:53Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Solaris11|hwmgmt]]
=Commands=
==hwmgmtcli==
==ilomconfig==
<source lang=bash>
# ilomconfig list network
</source>
==raidconfig==
<source lang=bash>
# raidconfig list all
</source>
==fwupdate==
<source lang=bash>
# fwupdate list all
</source>
==itpconfig==
<source lang=bash>
# itpconfig list interconnect
Interconnect
============
State: enabled
Type: USB Ethernet
SP Interconnect IP Address: 169.254.182.76
Host Interconnect IP Address: 169.254.182.77
Interconnect Netmask: 255.255.255.0
SP Interconnect MAC Address: 02:21:28:57:47:16
Host Interconnect MAC Address: 02:21:28:57:47:17
</source>
7838f817cc07d45192ebf7ef5362503615d55b0f
Solaris 11 First Steps
0
97
2082
963
2020-11-26T05:48:00Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Solaris11|First steps]]
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The automated installer, AI for short, is a new way to set up an install server. The configuration is kept in XML files.
For further information look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool fetches new versions of software over the network.
You can add multiple repositories, search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]].
===Support repository===
[https://pkg-register.oracle.com/register/certificate Get your client certificates]
[https://pkg-register.oracle.com/register/product_info/1/ Instructions]
== Live upgrade is now Boot environments (beadm) ==
For many years live upgrade was a bit cumbersome to use. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) we have a new way to make updates.
The new way to handle upgrades and updates is beadm, the boot environment admin tool. You can create a boot environment manually at any time, as known from live upgrade.
New is that software updates via pkg create boot environments automatically when needed (or when pkg is run with --require-new-be or --require-backup-be).
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the new network stack from the project Crossbow (known from OpenSolaris) is implemented.
The new stack virtualizes the networking of your Solaris instance, which enables a lot of new features such as virtual switches and virtual NICs.
You can build even complex virtualized networks inside a single Solaris instance.
=== Interface Names ===
The new network virtualization covers interface names. Interfaces are now named net0, net1, ... rather than after their drivers, so you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of the hardware your server is built from.
You can even name them after their usage, like frontend0 and backend0, so you always know what kind of traffic belongs to an interface.
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
=== ipadm ===
Together with dladm, ipadm is a powerful tool to manage your network stack.
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
1b3a8f80be75ff32277bd35b365718727bb6f008
Solaris 11 bootadm
0
207
2083
709
2020-11-26T05:48:33Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Solaris11|bootadm]]
==Booten via SP-Console 115200 Baud==
Add a new ttydef with 115200:
<source lang=bash>
# echo "console115200:115200 hupcl opost onclr:115200::console" >> /etc/ttydefs
</source>
Set the new console for system/console-login:default
<source lang=bash>
# svccfg -s svc:/system/console-login:default setprop ttymon/label=console115200
# svcadm refresh svc:/system/console-login:default
# svcadm restart svc:/system/console-login:default
</source>
Setup your boot menu:
<source lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya -xs"
# bootadm add-entry -i 3 "Solaris (kernel debugger)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya -k"
# bootadm add-entry -i 4 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 4 kargs="-B \$zfs_bootfs,console=ttya -x -m milestone=none"
</source>
d309dc91e83b85590e07364a68e5f75d4d1d9c99
Solaris 11 Networking
0
96
2084
1371
2020-11-26T05:49:18Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Solaris11|Networking]]
= Switch to manual configuration =
To prevent the automatic network configuration from reverting your changes, you have to enable manual configuration mode.
<pre>
# netadm enable -p ncp DefaultFixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<source lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</source>
== Change address ==
1. Create new interface:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Login to new IP.
3. Delete the old interface:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate dns in nameservice switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<source lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</source>
<source lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</source>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm interface.
To change the dladm interface, the ipadm interface has to be disabled first (DOWNTIME! BE CAREFUL!).
<source lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</source>
= Aggregate for iSCSI =
This is crude but worked on our Cisco switches:
<source lang=bash>
# dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</source>
= Set TCP parameters in immutable zones =
In normal immutable mode, even zlogin -U cannot change it:
<source lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</source>
You need to boot the zone writable first:
<source lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</source>
e3d9ebc4e970c172765cdcec74382b5f7fbb1752
Category:Solaris11
14
95
2085
239
2020-11-26T05:49:49Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Solaris]]
8ec8d4bf939b30f7de484329263cd2b4e597836a
Solaris 11 unsorted
0
379
2086
2020-11-26T05:52:27Z
Lollypop
2
Created page with "[[category:Solaris11]] == kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA == Problem: <pre> Apr 2 11:05:29 host42 kcfd[77]: [ID 180312 user.error] kcfd:..."
wikitext
text/x-wiki
[[category:Solaris11]]
== kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA ==
Problem:
<pre>
Apr 2 11:05:29 host42 kcfd[77]: [ID 180312 user.error] kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA
Apr 2 11:05:29 host42 openssl[2360]: [ID 238837 user.error] libpkcs11: /usr/lib/security/amd64/pkcs11_softtoken.so unexpected failure in ELF signature verification. See cryptoadm(1M). Skipping this plug-in.
</pre>
Solution:
<pre>
# pkg fix pkg:/crypto/ca-certificates
</pre>
== Solaris 11 up to date? ==
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
printf "%s\n%s\n" "${local}" "${remote}" | nawk -v package="${package}" '
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUp to date at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[4], branch_part[5], branch_part[6]);
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null || echo "Cannot refresh packages" >&2
if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</source>
8dcd8eed7f853af1e49b310bb617c945ca158e3c
2087
2086
2020-11-26T05:53:31Z
Lollypop
2
wikitext
text/x-wiki
[[category:Solaris11|unsorted]]
== kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA ==
Problem:
<pre>
Apr 2 11:05:29 host42 kcfd[77]: [ID 180312 user.error] kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA
Apr 2 11:05:29 host42 openssl[2360]: [ID 238837 user.error] libpkcs11: /usr/lib/security/amd64/pkcs11_softtoken.so unexpected failure in ELF signature verification. See cryptoadm(1M). Skipping this plug-in.
</pre>
Solution:
<pre>
# pkg fix pkg:/crypto/ca-certificates
</pre>
== Solaris 11 up to date? ==
<source lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
printf "%s\n%s\n" "${local}" "${remote}" | nawk -v package="${package}" '
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUp to date at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[4], branch_part[5], branch_part[6]);
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null || echo "Cannot refresh packages" >&2
if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</source>
79dc3b557e7d30f9f0fe2fbb10f095d89229059d
Nextcloud
0
368
2088
2055
2021-01-07T13:53:40Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
26645d9096fad515ed2dcc8ee69753e44646be34
2089
2088
2021-01-07T13:54:16Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
0b05bc8a2df642c61b763400441d4ec4b9ffd68a
2090
2089
2021-01-14T06:55:02Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
d7e74ec6e520e826678129b70aae270a289c7dd1
2091
2090
2021-01-14T06:56:35Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
5cfa732605015d97ed9c6d0d06c16d897bae50d6
2105
2091
2021-05-07T12:10:43Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading to Nextcloud 21!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your apcu.ini:
<source lang=ini>
apc.enable_cli=1
</source>
Otherwise the upgrade runs into memory trouble; in my case the whole server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
f069dd667db9232f1ec478a24f3b9d055d2482a6
2106
2105
2021-05-07T12:11:33Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading to Nextcloud 21!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
<source lang=ini>
apc.enable_cli=1
</source>
Otherwise the upgrade runs into memory trouble; in my case the whole server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
1c1cb1dfd630897c1dd31923a1b0101d295e2b8a
2107
2106
2021-05-07T12:19:53Z
Lollypop
2
/* Manual upgrade */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cron job for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
<source lang=ini>
apc.enable_cli=1
</source>
Otherwise the upgrade runs into memory trouble; in my case the whole server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
9984673f050e3f3390a40352e688bd99f7c3c6a5
RadSecProxy
0
345
2092
2071
2021-01-19T09:30:31Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed, since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<source lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</source>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<source lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</source>
===Configure===
<source lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</source>
=== Another example: Version 1.7.2 from git ===
<source lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</source>
==Config==
===/etc/radsec/radsecproxy.conf===
<source lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # for testing; reduce to 3 later
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</source>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<source lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</source>
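Instead of hashing and renaming every CA file by hand, newer OpenSSL (1.1 and later; older releases ship the equivalent c_rehash script) can populate the whole directory at once with <i>openssl rehash</i>. A self-contained sketch using a throwaway self-signed certificate; on the proxy host you would point it at /etc/radsec/cert/ca instead:
<source lang=bash>
# demo in temporary directories: the key is kept outside the CA dir
cadir=$(mktemp -d)
keydir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout "$keydir/key.pem" -out "$cadir/demo-ca.pem" 2>/dev/null
# create the <subject-hash>.0 symlink, e.g. 1e09d511.0 for the TeleSec root
openssl rehash "$cadir"
ls "$cadir"
</source>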
====/etc/radsec/cert/ca/1e09d511.0====
<source lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</source>
===/etc/radsec/rewrites.conf===
<source lang=text>
## Empty for our setup
</source>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<source lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</source>
===/etc/radsec/servers.conf===
<source lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</source>
===/etc/radsec/realms.conf===
<source lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</source>
===/etc/radsec/cert/radsecproxy.pem===
<source lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</source>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log or use syslog.
The config, certificate and key is not readable by the user (nogroup) but by the group radsecproxy where the porocess lives in (see systemd unit file radsecproxy.service).
====User====
<source lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</source>
====Permissions====
<source lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</source>
====systemd unit file====
<source lang=bash>
# systemctl cat radsecproxy.service
</source>
<source lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</source>
Put this into /lib/systemd/system/radsecproxy.service and run:
<source lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</source>
===Testing===
Check on the server if the radsecproxy is listening:
<source lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND    PID   USER         FD  TYPE  DEVICE  SIZE/OFF  NODE  NAME
radsecpro  1344  radsecproxy  9u  IPv4  22751   0t0       TCP   <server ip>:2083 (LISTEN)
</source>
===Certificate end date===
<source lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct  9 12:13:17 2020 GMT
</source>
7ee6450304d7e4610e17092b0288142ee2b5d3f6
Category:Eduroam
14
346
2093
1794
2021-01-19T09:30:50Z
Lollypop
2
wikitext
text/x-wiki
[[ Category: KnowHow ]]
1bc5413399cf0c9d648d014ff6486b9391f0c785
SSH Tipps und Tricks
0
75
2094
1991
2021-01-21T08:28:57Z
Lollypop
2
/* SSH über ein oder mehrere Hops */
wikitext
text/x-wiki
[[Kategorie:SSH|Tipps]]
[[Kategorie:Putty|Tipps]]
=SSH, the way to the destination=
==SSH across one or more hops==
To open an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you always log in to one hop and then log in to the next from there, it is often painful to drag port forwardings or a SOCKS5 proxy along. It is much easier to define ProxyJump entries for the whole path from Host_A to Host_B.
Host_B is only reachable from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyJump GW_2
</pre>
GW_2 in turn is only reachable via GW_1, so we need an entry for that hop as well:
<pre>
Host GW_2
ProxyJump GW_1
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and you are tunnelled through the two gateways GW_1 and GW_2.
==Port forwardings, e.g. for NFS, are now as simple as this==
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are built up in the background and the port forwarding runs directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a BASH builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
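For OpenSSH releases before 7.3, which do not know ProxyJump yet, the same hop can be written as a ProxyCommand; %h and %p are substituted by ssh exactly as described above (a sketch, host names as in the text):
<pre>
Host Host_B
    ProxyCommand ssh -W %h:%p GW_2
</pre>
The -W option (available since OpenSSH 5.4) forwards stdin/stdout to the given host and port via the gateway, so no netcat or /dev/tcp trick is needed on GW_2.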
==Breaking out of paradise==
Problem: the environment you are working in is so unhappily walled in with firewalls that you cannot get anything done. But you need SSH access to the outside, to quickly check on or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then put this into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, <i>ssh ssh-via-proxy</i> (the Host alias from the config) puts you on the SSH destination you want to reach. Of course you can use that server as a proxy hop again, and so on.
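If connect-proxy is not at hand, OpenBSD netcat can speak the HTTP CONNECT method as well; a sketch with the same placeholder names:

<source>
Host ssh-via-proxy
    Hostname ssh-server
    Port 443
    ProxyCommand nc -X connect -x proxy-server:3128 %h %p
</source>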
==Oh right... the internal wiki...==
Not a problem either: if it is only reachable from the internal network, we simply go through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local SOCKS5 proxy port (dynamic application-level port forwarding)
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
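Meanwhile, OpenSSH 8.7 and later do provide config equivalents for both flags, so the entry above can be extended like this:

<source>
Host wiki
    ...
    SessionType none              # equivalent of -N
    ForkAfterAuthentication yes   # equivalent of -f
</source>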
=The fingerprint=
For verification it is often easier to work with shorter strings. That is what the fingerprint is for: comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
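Newer OpenSSH shows SHA256 fingerprints by default; -E selects the hash when the other side still uses the old MD5 colon notation. A runnable sketch with a throwaway key:

<source lang=bash>
$ ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q
$ ssh-keygen -lf /tmp/demo_key.pub
256 SHA256:... user@host (ED25519)
$ ssh-keygen -lf /tmp/demo_key.pub -E md5
256 MD5:... user@host (ED25519)
</source>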
=Restricting users=
<source lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</source>
=PuTTY Portable=
==pageant zusammen mit putty starten==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
See also, regarding PortableApps:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<source lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</source>
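The one-liner only extracts the public key. If the putty-tools package is installed, puttygen can convert both halves directly:

<source lang=bash>
$ puttygen pubkey.ppk -O public-openssh
$ puttygen pubkey.ppk -O private-openssh -o key.pem
</source>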
=Probleme mit älteren Gegenstellen=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<source lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</source>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<source lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</source>
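Instead of typing the workaround every time, both options can be pinned per host in ~/.ssh/config (legacy-box is a placeholder):

<source>
Host legacy-box
    HostKeyAlgorithms +ssh-dss
    KexAlgorithms +diffie-hellman-group1-sha1
</source>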
=SFTP chroot=
<source lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</source>
==/etc/fstab==
<source lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</source>
==/etc/ssh/sshd_config==
<source lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</source>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<source lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</source>
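A quick check with the user just created: sftp should work and be rooted in the chroot, while an interactive login must not yield a shell because of the ForceCommand:

<source lang=bash>
$ sftp myuser@server    # should succeed, landing in /sftp_chroot
$ ssh myuser@server     # must not give a shell
</source>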
= Two factor authentication =
== Google Authenticator ==
Since the Google Authenticator app is available for several smartphone OSes, I picked it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<source lang=bash>
$ sudo apt-get install libpam-google-authenticator
</source>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<source lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</source>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : if pam_google_authenticator returns success (the code was correct), the whole authentication is done.
* new_authtok_reqd=done : "new authentication token required" is also treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : if pam_google_authenticator fails, no other authentication will be tried.
* nullok : allow a user to pass this module even if no OTP secret has been set up yet
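Each user still has to enroll once by running the google-authenticator tool, which writes ~/.google_authenticator and prints the QR code and secret for the phone app. A sketch (flags: time-based codes, disallow code reuse; see its --help for the rest):

<source lang=bash>
$ google-authenticator -t -d
</source>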
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in the /etc/ssh/sshd_config:
<source lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</source>
Without the line in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd will still ask for a password via keyboard-interactive, because /etc/pam.d/sshd enables password authentication.
7cdef67fe641c02af3dcbe23309df73cf6276b6b
Category:SuSE
14
359
2095
1894
2021-01-26T13:21:33Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
SuSE NIS
0
380
2096
2021-01-26T13:27:43Z
Lollypop
2
Created page with " ==NIS Client== # zypper in yast2-nis-client ypbind /etc/sysconfig/network/config: NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime" NETCONFIG_NIS_S..."
wikitext
text/x-wiki
==NIS Client==
# zypper in yast2-nis-client ypbind
/etc/sysconfig/network/config:
NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime"
NETCONFIG_NIS_STATIC_SERVERS="hhloklnx02.srv.ndr-net.de"
NETCONFIG_NIS_SETDOMAINNAME="yes"
NETCONFIG_NIS_POLICY="auto"
# netconfig update -f
Check:
# cat /etc/yp.conf
...
ypserver hhloklnx02.srv.ndr-net.de
Set the NIS domain:
# nisdomainname nis.ndr-net.de
Check:
# nisdomainname
nis.ndr-net.de
#
Add to /etc/passwd:
+::::::
Add to /etc/shadow:
+::0:0:0::::
# yast
Network Services -> NIS Client
[Alt]+[u] (Use NIS)
[F10] Finish
[F9] Quit
Check:
# ypcat passwd.byname
27e4a80c26f02bdbd6bf2e417bba2ff31a7a13b6
2097
2096
2021-01-26T13:43:35Z
Lollypop
2
/* NIS Client */
wikitext
text/x-wiki
==NIS Client==
===Add packages===
<source>
# zypper in yast2-nis-client ypbind
</source>
===/etc/sysconfig/network/config===
<source>
NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime"
NETCONFIG_NIS_STATIC_SERVERS="nis-server.domain.tld"
NETCONFIG_NIS_SETDOMAINNAME="yes"
NETCONFIG_NIS_POLICY="auto"
</source>
<source>
# netconfig update -f
</source>
Check:
<source>
# cat /etc/yp.conf
...
ypserver nis-server.domain.tld
</source>
===Set NIS Domain===
<source>
# nisdomainname nis.domain.tld
</source>
Check:
<source>
# nisdomainname
nis.domain.tld
#
</source>
===Add to /etc/passwd===
<source>
+::::::
</source>
===Add to /etc/shadow===
<source>
+::0:0:0::::
</source>
===/etc/nsswitch.conf===
<source>
...
passwd: compat
group: compat
...
</source>
alternative for older installations:
<source>
...
passwd: files nis
group: files nis
...
</source>
===yast===
<source>
Network Services -> NIS Client
[Alt]+[u] (Use NIS)
[F10] Finish
[F9] Quit
</source>
Check:
<source>
# ypcat passwd.byname
</source>
1b5d6ee7f601e4528796582917d0e12ec4c4593d
2098
2097
2021-01-26T13:44:07Z
Lollypop
2
/* NIS Client */
wikitext
text/x-wiki
[[Category:SuSE]]
==NIS Client==
===Add packages===
<source>
# zypper in yast2-nis-client ypbind
</source>
===/etc/sysconfig/network/config===
<source>
NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime"
NETCONFIG_NIS_STATIC_SERVERS="nis-server.domain.tld"
NETCONFIG_NIS_SETDOMAINNAME="yes"
NETCONFIG_NIS_POLICY="auto"
</source>
<source>
# netconfig update -f
</source>
Check:
<source>
# cat /etc/yp.conf
...
ypserver nis-server.domain.tld
</source>
===Set NIS Domain===
<source>
# nisdomainname nis.domain.tld
</source>
Check:
<source>
# nisdomainname
nis.domain.tld
#
</source>
===Add to /etc/passwd===
<source>
+::::::
</source>
===Add to /etc/shadow===
<source>
+::0:0:0::::
</source>
===/etc/nsswitch.conf===
<source>
...
passwd: compat
group: compat
...
</source>
alternative for older installations:
<source>
...
passwd: files nis
group: files nis
...
</source>
===yast===
<source>
Network Services -> NIS Client
[Alt]+[u] (Use NIS)
[F10] Finish
[F9] Quit
</source>
Check:
<source>
# ypcat passwd.byname
</source>
84cb0de63048d29ff9e0a49b648b3bcbd0b1902d
2099
2098
2021-01-26T13:57:43Z
Lollypop
2
wikitext
text/x-wiki
[[Category:SuSE]]
=!!!! First of all: you do NOT want NIS, for security reasons !!!!=
NIS is not NIS+, and it uses no encryption. So do not use it, or if you really have to, use it wisely!
==NIS Client==
===Add packages===
<source>
# zypper in yast2-nis-client ypbind
</source>
===/etc/sysconfig/network/config===
<source>
NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime"
NETCONFIG_NIS_STATIC_SERVERS="nis-server.domain.tld"
NETCONFIG_NIS_SETDOMAINNAME="yes"
NETCONFIG_NIS_POLICY="auto"
</source>
<source>
# netconfig update -f
</source>
Check:
<source>
# cat /etc/yp.conf
...
ypserver nis-server.domain.tld
</source>
===Set NIS Domain===
<source>
# nisdomainname nis.domain.tld
</source>
Check:
<source>
# nisdomainname
nis.domain.tld
#
</source>
===Add to /etc/passwd===
<source>
+::::::
</source>
===Add to /etc/shadow===
<source>
+::0:0:0::::
</source>
===/etc/nsswitch.conf===
<source>
...
passwd: compat
group: compat
...
</source>
alternative for older installations:
<source>
...
passwd: files nis
group: files nis
...
</source>
===yast===
<source>
Network Services -> NIS Client
[Alt]+[u] (Use NIS)
[F10] Finish
[F9] Quit
</source>
Check:
<source>
# ypcat passwd.byname
</source>
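The NSS side can be verified with getent as well; nisuser stands for any account that only exists in NIS:

<source>
# getent passwd nisuser
</source>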
fcb3588dd6f856f28d7095f13d80b16b56bd6464
Mauersegler
0
381
2100
2021-02-03T09:56:58Z
Lollypop
2
Created page with "==Lockrufe mit einem RaspberryPi und Bluetooth-Boxen abspielen== ===Womit ich es realisiert habe=== * RaspberryPi 3B. * Micro-SD Karte (die Größe ist den aktuellen Anforder..."
wikitext
text/x-wiki
==Playing lure calls with a RaspberryPi and Bluetooth speakers==
===What I used to build it===
* RaspberryPi 3B.
* Micro-SD card (for the size, see the current requirements on [https://www.raspbian.org/ Raspbian.org]).
* USB power supply with a Micro-USB plug (for the RaspberryPi).
* Bluetooth-capable, waterproof speakers (in my case [https://www.amazon.de/gp/product/B07QY66L9M Wireless Bluetooth Lautsprecher, Sonkir Tragbarer Bluetooth 5.0 TWS Lautsprecher mit Dual-Treiber Bass, 3D-Stereo, FM Radio, Freisprechfunktion, integriertem 1500-mAh-Akku]).
* USB power supply with a Micro-USB plug (for the speakers; other speakers may need a different power supply!).
* My laptop has an SD card reader to get the operating system onto the SD card. If yours does not, you also need a USB SD card reader. Those do not cost the world either.
===Reasons for this choice===
Thanks to the good range of Bluetooth, the RaspberryPi can stay inside the house; only the speakers and one power supply have to go outside.
===Install Raspbian on the Pi===
* Instructions etc. can be found on [https://www.raspbian.org/ Raspbian.org]; it would make no sense to duplicate all of that here.
===Enable Bluetooth===
Connect to the RaspberryPi via ssh as user pi.
Windows users can use [https://www.putty.org/ Putty] for this.
====Enable the Bluetooth service permanently====
<source lang=bash>
pi@raspberrypi:~ $ sudo systemctl enable bluetooth.service
pi@raspberrypi:~ $ sudo systemctl start bluetooth.service
</source>
====Check the Bluetooth service status====
This is what it should look like:
<source lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-02-03 09:18:58 CET; 32min ago
Docs: man:bluetoothd(8)
Main PID: 943 (bluetoothd)
Status: "Running"
Tasks: 1 (limit: 2062)
CGroup: /system.slice/bluetooth.service
`-943 /usr/lib/bluetooth/bluetoothd
...
</source>
If, however, it looks like this:
<source lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
</source>
then the corresponding Bluetooth driver modules are not loaded in the operating system.
====Enable the Bluetooth modules====
If the modules are disabled (blacklisted), we have to change that.
The command
egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
shows you the file where this happens.
Example:
<source lang=bash>
pi@raspberrypi:~ $ egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist hci_uart
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist btbcm
</source>
In my example this is <i>/etc/modprobe.d/blacklist-bluetooth.conf</i>.
The command
sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" <file>
comments out the <i>blacklist</i> lines for the two required modules.
<source lang=bash>
pi@raspberrypi:~ $ sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" /etc/modprobe.d/blacklist-bluetooth.conf
</source>
Then perform a reboot:
<source lang=bash>
pi@raspberrypi:~ $ sudo reboot
</source>
If the modules are loaded after the reboot, it looks like this:
<source lang=bash>
pi@raspberrypi:~ $ sudo lsmod | grep bt
btbcm 16384 1 hci_uart
bluetooth 393216 37 hci_uart,bnep,btbcm,rfcomm
</source>
Then check the Bluetooth service status again (see above).
Now everything should be fine.
===Finding the Bluetooth speakers===
<source lang=bash>
pi@raspberrypi:~ $ sudo bluetoothctl
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:E6:D2:78 Discovering: yes
[NEW] Device D6:53:25:BE:36:74 Connected:
</source>
12ff593eaa30298bdd10c18a80d593305259f672
2101
2100
2021-02-03T16:12:23Z
Lollypop
2
wikitext
text/x-wiki
==Playing lure calls with a RaspberryPi and Bluetooth speakers==
===What I used to build it===
* RaspberryPi 3B.
* Micro-SD card (for the size, see the current requirements on [https://www.raspbian.org/ Raspbian.org]).
* USB power supply with a Micro-USB plug (for the RaspberryPi).
* Bluetooth-capable, waterproof speakers (in my case [https://www.amazon.de/gp/product/B07QY66L9M Wireless Bluetooth Lautsprecher, Sonkir Tragbarer Bluetooth 5.0 TWS Lautsprecher mit Dual-Treiber Bass, 3D-Stereo, FM Radio, Freisprechfunktion, integriertem 1500-mAh-Akku]).
* USB power supply with a Micro-USB plug (for the speakers; other speakers may need a different power supply!).
* My laptop has an SD card reader to get the operating system onto the SD card. If yours does not, you also need a USB SD card reader. Those do not cost the world either.
===Reasons for this choice===
Thanks to the good range of Bluetooth, the RaspberryPi can stay inside the house; only the speakers and one power supply have to go outside.
===Install Raspbian on the Pi===
* Instructions etc. can be found on [https://www.raspbian.org/ Raspbian.org]; it would make no sense to duplicate all of that here.
===Enable Bluetooth===
Connect to the RaspberryPi via ssh as user pi.
Windows users can use [https://www.putty.org/ Putty] for this.
====Enable the Bluetooth service permanently====
<source lang=bash>
pi@raspberrypi:~ $ sudo systemctl enable bluetooth.service
pi@raspberrypi:~ $ sudo systemctl start bluetooth.service
</source>
====Check the Bluetooth service status====
This is what it should look like:
<source lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-02-03 09:18:58 CET; 32min ago
Docs: man:bluetoothd(8)
Main PID: 943 (bluetoothd)
Status: "Running"
Tasks: 1 (limit: 2062)
CGroup: /system.slice/bluetooth.service
`-943 /usr/lib/bluetooth/bluetoothd
...
</source>
If, however, it looks like this:
<source lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
</source>
then the corresponding Bluetooth driver modules are not loaded in the operating system.
====Enable the Bluetooth modules====
If the modules are disabled (blacklisted), we have to change that.
The command
egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
shows you the file where this happens.
Example:
<source lang=bash>
pi@raspberrypi:~ $ egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist hci_uart
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist btbcm
</source>
In my example this is <i>/etc/modprobe.d/blacklist-bluetooth.conf</i>.
The command
sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" <file>
comments out the <i>blacklist</i> lines for the two required modules.
<source lang=bash>
pi@raspberrypi:~ $ sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" /etc/modprobe.d/blacklist-bluetooth.conf
</source>
Then perform a reboot:
<source lang=bash>
pi@raspberrypi:~ $ sudo reboot
</source>
If the modules are loaded after the reboot, it looks like this:
<source lang=bash>
pi@raspberrypi:~ $ sudo lsmod | grep bt
btbcm 16384 1 hci_uart
bluetooth 393216 37 hci_uart,bnep,btbcm,rfcomm
</source>
Then check the Bluetooth service status again (see above).
Now everything should be fine.
===Finding the Bluetooth speakers===
<source lang=bash>
pi@raspberrypi:~ $ sudo bluetoothctl
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: yes
</source>
Now switch the Bluetooth speakers on:
<source lang=bash>
[NEW] Device D6:53:25:BE:37:73 SPEAKER5.0
</source>
Ah, there it is!
Now connect and get out:
<source lang=bash>
[bluetooth]# scan off
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: no
Discovery stopped
[bluetooth]#
[bluetooth]# connect D6:53:25:BE:37:73
Attempting to connect to D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Connected: yes
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 ServicesResolved: yes
[CHG] Device D6:53:25:BE:37:73 Paired: yes
Connection successful
[SPEAKER5.0]# trust D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Trusted: yes
Changing D6:53:25:BE:37:73 trust succeeded
[SPEAKER5.0]# paired-devices
Device D6:53:25:BE:37:73 SPEAKER5.0
[SPEAKER5.0]# quit
</source>
Now the address of the speakers has to be entered in <i>/etc/asound.conf</i> (the file normally does not exist yet; just create it).
<source>
pcm.!default {
type plug
slave {
pcm {
type bluealsa
device D6:53:25:BE:37:73
profile "a2dp"
}
}
hint {
show on
description "Bluetooth SPEAKER5.0"
}
}
ctl.!default {
type bluealsa
}
</source>
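Note that the <i>bluealsa</i> PCM type is provided by the BlueALSA package; if aplay later complains about an unknown PCM, it is probably not installed. On Raspbian it should be available as:

<source lang=bash>
pi@raspberrypi:~ $ sudo apt-get install bluealsa
</source>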
One more reboot:
<source lang=bash>
pi@raspberrypi:~ $ sudo reboot
</source>
The speakers should now also play a little signal when the pairing happens during startup of the RaspberryPi.
Now you can test:
<source lang=bash>
pi@raspberrypi:~ $ aplay -L
...
default
Bluetooth SPEAKER5.0
...
</source>
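Finally, play a call; lockruf.wav stands for whatever sound file you have copied onto the Pi:

<source lang=bash>
pi@raspberrypi:~ $ aplay lockruf.wav
</source>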
77bf33efce35dce1befcddf564deaebfaa0aba51
Systemd
0
233
2102
2024
2021-04-07T09:51:16Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, as is usual for daemons, it has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the old init system: it keeps watching processes after they have been started, lists sockets owned by processes started by systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
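Failed units can also be listed directly:

<source lang=bash>
# systemctl list-units --state=failed
</source>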
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
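The --value form is handy in scripts, for example to send a SIGHUP to the main process of a service (ssh.service as above):

<source lang=bash>
pid=$(systemctl show --property=MainPID --value ssh.service)
[ "${pid:-0}" -gt 0 ] && kill -HUP "$pid"
</source>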
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that, from this level on, the process tree is stuck with the UID and the privileges it has. This prohibits privilege escalation: no set-UID binary will give an attacker more privileges than the user of the exploited service has.
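A minimal sketch of a unit fragment combining both mechanisms; the service name and paths are made up:

<source lang=ini>
# /etc/systemd/system/mydaemon.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/mydaemon
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=true
PrivateTmp=true
</source>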
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
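On newer systemd versions (239 and later) the systemd-resolve tool has been superseded by resolvectl; the equivalents are:

<source lang=bash>
$ resolvectl status
$ resolvectl statistics
$ resolvectl flush-caches
</source>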
=systemd-timesyncd, an alternative to ntp=
ntpd is a good but fat old workhorse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP is a list of servers used if none of the servers in the NTP list can be reached.
If you want to split them into multiple files or generate them at start you can put files with the ending <i>.conf</i> in <i>/etc/systemd/timesyncd.conf.d/</i>.
After you setup the config you can enable the timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a seperate namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount-unit file has to be the mount destination path. Dashes must be escaped. To get the resulting name you can easily use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double escape (\\) the x2d (which is a dash -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i> .
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>
To use this service for PrivateTMP directories for example of <i>apache2.service</i> you may use a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i> like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i> :
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will cleanup all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours every time the <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the actual boot-id.
What ist that? An id which is generated at each boot.
You can get the boot-id with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the actual one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run ist once an hour? OK, just rescedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME >> 2>&1 &
ExecStop=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME 2>&1 &
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
[[Category:Linux]]
=systemd=
Yes, like most daemon names, it has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It extends the classic init in many ways: it can supervise processes after they have been started, list the sockets owned by processes it launched, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see below, there are hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, the anacron spool directory was deleted. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities do.
'''BUT''' beware of programs that simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it already has. UID changes are prohibited, so no set-UID binary will help an attacker gain more privileges than the user of the exploited service has.
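A minimal hardening drop-in combining both ideas might look like this (just a sketch; the unit name and the capability set are made-up examples, not a recommendation for any specific service):
<source lang=ini>
# /etc/systemd/system/mydaemon.service.d/hardening.conf  (hypothetical unit)
[Service]
# allow binding privileged ports, drop all other capabilities
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
# freeze the privileges from here on
NoNewPrivileges=true
</source>
Activate it with ''systemctl daemon-reload'' followed by a restart of the unit.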
==Limiting access to a socket==
For example for the check_mk monitoring system:
<source lang=bash>
# systemctl edit check_mk.socket
</source>
Deny all hosts except the monitoring server (172.17.128.193):
<source lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</source>
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
The classic ntpd is a good but fat old horse for servers; clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done in <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
NTP is a space-separated list of NTP servers.
FallbackNTP lists servers that are tried if none of the servers from NTP can be reached.
If you want to split the configuration into multiple files, or generate parts of it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the configuration you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
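If you use the <i>timesyncd.conf.d</i> variant mentioned above, a drop-in could look like this (a sketch; the file name is an arbitrary example):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/10-local.conf  (hypothetical file name)
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
</source>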
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should stop and disable (or uninstall) ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach ''zfs.target'', ''zed.service'' should be started if it is enabled (''Wants''), and ''zfs-mount.service'' and ''zfs-share.service'' must be started (''Requires'').
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts private incarnations of /tmp and /var/tmp which live only as long as the unit is up. When the unit goes down the directories are cleaned up. This is implemented with a separate mount namespace for the unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf='' in the ''[Unit]'' section; it takes a space-separated list of unit names.
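As a sketch with hypothetical units ''a.service'' and ''b.service'' sharing one private /tmp:
<source lang=ini>
# a.service -- owns the private namespace
[Service]
PrivateTmp=true
</source>
<source lang=ini>
# b.service -- joins the namespace of a.service
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
</source>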
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you run services in a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file must be derived from the mount destination path: slashes become dashes, and literal dashes must be escaped. You can easily generate the resulting name with systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
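For such a simple ASCII path the escaping rule can also be sketched in plain shell (an approximation: only the slash and dash handling is shown here; the real systemd-escape handles many more characters):
<source lang=bash>
path=/var/chroot/run/systemd/journal/dev-log
# drop the leading slash, escape literal dashes as \x2d, then turn slashes into dashes
printf '%s.mount\n' "$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g')"
</source>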
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to escape the backslash for the shell (\\x2d); \x2d encodes the dash (-).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this for the PrivateTmp directories of, for example, <i>apache2.service</i>, you can place a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
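The 6h age rule can be roughly sketched with plain <i>find</i> in a throw-away directory (an approximation; systemd's real aging logic also looks at atime and ctime):
<source lang=bash>
dir=$(mktemp -d)                     # stand-in for /tmp/systemd-private-%b-apache2.service-*/tmp
touch -d '7 hours ago' "$dir/stale"  # older than the 6h limit
touch "$dir/fresh"
find "$dir" -mindepth 1 -mmin +360 -delete
ls "$dir"                            # only 'fresh' is left
rm -rf "$dir"
</source>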
The <i>%b</i> in the path is the current boot ID.
What is that? An ID which is generated anew at each boot.
You can get the boot-id with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the current one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
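The kernel exposes the same ID in UUID form under /proc/sys/kernel/random/boot_id; stripping the dashes yields the 32-character form used by <i>%b</i>:
<source lang=bash>
# boot ID without dashes, as used in the systemd-private-* paths
tr -d '-' < /proc/sys/kernel/random/boot_id
</source>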
When will the next cleanup run? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run it once an hour? Then just reschedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: ExecStart=/ExecStop= are not run through a shell, so
# redirections and a trailing '&' do not work here.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
579db2431b0b82763aea884397a34983e7fe6af6
2109
2103
2021-06-02T14:29:45Z
Lollypop
2
/* Examples */
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, like daemons are usually written this has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the normal init system with the ability to watch processes after the start has done, list sockets owned by processes started with systemd, adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, somebody deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
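Socket units like these implement socket activation: systemd listens on the socket and only starts the matching service on the first connection. A minimal sketch (''myapp.socket''/''myapp.service'' are placeholder names, not taken from the list above):
<source lang=ini>
# /etc/systemd/system/myapp.socket
[Socket]
ListenStream=/run/myapp.sock
[Install]
WantedBy=sockets.target
</source>
<source lang=ini>
# /etc/systemd/system/myapp.service
[Service]
# the daemon receives the listening socket via sd_listen_fds(3)
ExecStart=/usr/local/bin/myapp
</source>
Enabling ''myapp.socket'' is enough; the service itself stays stopped until the first client connects.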
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped for the process.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it already has. This prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
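A minimal sketch of applying this to an existing service via a drop-in (''foo.service'' is a placeholder name):
<source lang=ini>
# systemctl edit foo.service
[Service]
NoNewPrivileges=true
</source>
After a ''systemctl daemon-reload'' and a restart of the unit, the setting takes effect.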
==Limiting access to a socket==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
Deny everything except the monitoring server (172.17.128.193):
<source lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</source>
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are used if none of the servers from the NTP list can be reached.
If you want to split the settings into multiple files, or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the config you can enable timesyncd via:
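For example, such a drop-in could look like this (the file name and the server address are only examples):
<source lang=ini>
# /etc/systemd/timesyncd.conf.d/10-local-ntp.conf
[Time]
NTP=192.168.178.1
</source>
Settings in drop-ins override the values from the main <i>timesyncd.conf</i>.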
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means: to reach the ''zfs.target'', ''zed.service'' should be started if possible (''Wants='') and ''zfs-mount.service'' and ''zfs-share.service'' must be started (''Requires='').
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
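These options restrict the file system view of a service. A minimal sketch (the paths are only examples):
<source lang=ini>
[Service]
# visible, but not writable
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/usr
# not visible at all
InaccessibleDirectories=/home
# explicitly writable again below a read-only prefix
ReadWriteDirectories=/var/lib/myservice
</source>
On newer systemd versions these settings are called ''ReadOnlyPaths='', ''InaccessiblePaths='' and ''ReadWritePaths=''.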
====Private Tmp-Directories====
Mounts private instances of /tmp and /var/tmp which live only as long as the unit is up. When the unit goes down, the directories are cleared. This is done with a separate mount namespace for this unit.
<source lang=ini>
[Service]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=<unit1> [<unit2> ...]'' (a space-separated list, set in the ''[Unit]'' section of the joining unit).
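A sketch with two hypothetical units sharing one private /tmp:
<source lang=ini>
# a.service owns the namespace
[Service]
PrivateTmp=true
</source>
<source lang=ini>
# b.service joins the namespace of a.service
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</source>
Both units then see the same private /tmp and /var/tmp.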
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we'd better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the mount destination path: slashes become dashes, and literal dashes in the path must be escaped as \x2d. To get the resulting name you can simply use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to escape the backslash on the shell (\\x2d); \x2d encodes a dash (-).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want it mounted before syslog-ng and pdns-recursor come up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Read the log from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of e.g. <i>apache2.service</i>, you can place a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time the <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot-id.
What is that? An ID which is generated at each boot.
You can get the boot-id with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the current one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run it once an hour? Just reschedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<source lang=ini>
# /etc/systemd/system/tomcat-example.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</source>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<source>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</source>
== Oracle ==
UNTESTED, just an example!
File this as <i>/usr/lib/systemd/system/dbora@.service</i> (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Exec lines are not run through a shell, so no redirections or '&' here
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
dfab59843dbdea7e8995bc5005a62f1dd5751488
2130
2109
2021-06-29T13:35:48Z
Lollypop
2
/* Limiting access to a socket */
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, it is written lowercase, as daemon names usually are.
=What is systemd?=
systemd is a replacement for the old and rusty SysV init system of Linux.
It has many new features: it can supervise processes after they have started, list the sockets owned by processes started by systemd, add security features such as [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- as well as software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, somebody deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped for the process.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it already has. This prohibits privilege escalation: no set-UID binary will help an attacker gain more privileges than the user of the exploited service.
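A minimal sketch of applying this to an existing service via a drop-in (''foo.service'' is a placeholder name):
<source lang=ini>
# systemctl edit foo.service
[Service]
NoNewPrivileges=true
</source>
After a ''systemctl daemon-reload'' and a restart of the unit, the setting takes effect.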
==Limiting access to a socket==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
Deny everything except the monitoring server (172.17.128.193):
<source lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</source>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
Clear the default listener list and listen on IPv4 only (port 6556):
<source lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</source>
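Both restrictions can also be combined in a single drop-in (a sketch; 6556 is the agent port used in the examples above):
<source lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</source>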
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are used if none of the servers from the NTP list can be reached.
If you want to split the settings into multiple files, or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
After you have set up the config you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a seperate namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set we can use this as normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount-unit file has to be the mount destination path. Dashes must be escaped. To get the resulting name you can easily use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double escape (\\) the x2d (which is a dash -).
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i> .
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>
To use this service for PrivateTMP directories for example of <i>apache2.service</i> you may use a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i> like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i> :
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will cleanup all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours every time the <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the actual boot-id.
What ist that? An id which is generated at each boot.
You can get the boot-id with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the actual one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run ist once an hour? OK, just rescedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<source lang=ini>
# /etc/systemd/system/tomcat-ndr.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</source>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<source>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</source>
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME >> 2>&1 &
ExecStop=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME 2>&1 &
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
9d21ae33a611bb26cdf885dde2eada99aa0df753
2131
2130
2021-06-29T13:36:25Z
Lollypop
2
/* Limiting a socket to IPv4 */
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, like most daemon names, it has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system with the ability to supervise processes after startup, to list the sockets owned by processes started by systemd, security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) of Solaris :-).
=Take a look with systemctl=
==List units==
As you can see below, there are both hardware- and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, someone deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it is started as root, all unnecessary capabilities are dropped when the process starts.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits UID changes: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
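A minimal sketch of this as a drop-in for a hypothetical unit (the directive comes from systemd.exec(5); the unit it is applied to is up to you):
<source lang=ini>
[Service]
NoNewPrivileges=true
</source>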
==Limiting access to a socket==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
Deny from all, but the monitoring server (172.17.128.193):
<source lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</source>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
First clear the old value, then set the new one (an empty ''ListenStream='' resets the list):
<source lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</source>
=systemd-resolved, the name resolution service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd, an alternative to ntp=
ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done via <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are used if none of the ''NTP'' servers can be reached.
If you want to split the configuration into multiple files, or generate it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
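For example, a hypothetical drop-in <i>/etc/systemd/timesyncd.conf.d/50-local.conf</i> (the file name and server names are just placeholders) could look like this:
<source lang=ini>
[Time]
NTP=ntp1.example.com ntp2.example.com
</source>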
After you have set up the configuration you can enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Control your success with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'', ''zed.service'' should be started if enabled (''Wants''), and ''zfs-mount.service'' and ''zfs-share.service'' must be started (''Requires'').
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
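These directives restrict a service's view of the filesystem. A minimal sketch (the paths are just examples; see systemd.exec(5) for details):
<source lang=ini>
[Service]
ReadWriteDirectories=/var/lib/example
ReadOnlyDirectories=/etc
InaccessibleDirectories=/home
</source>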
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which lives only as long as the unit is up. When the unit goes down, the directories are cleaned up. This is done via a separate mount namespace for this unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory you can use ''JoinsNamespaceOf=<unit1> <unit2> ...'' (a space-separated list).
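As a sketch, two hypothetical units sharing one private /tmp could be declared like this:
<source lang=ini>
# a.service
[Service]
PrivateTmp=true

# b.service
[Service]
PrivateTmp=true
JoinsNamespaceOf=a.service
</source>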
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability it does not work:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works for root, as root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), a few things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file must be derived from the mount destination path, with special characters such as dashes escaped. The easiest way to get the resulting name is systemd-escape:
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
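If systemd-escape is not at hand, the escaping for this particular path can be sketched with sed (assumption: only the dashes inside path components need escaping here; systemd-escape handles more special characters):

```shell
# Escape dashes inside components as \x2d, then turn '/' into '-'.
path="var/chroot/run/systemd/journal/dev-log"
unit="$(printf '%s' "$path" | sed -e 's/-/\\x2d/g' -e 's|/|-|g').mount"
printf '%s\n' "$unit"
```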
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double the backslash (\\) in the shell so that the literal \x2d (the escaped dash) ends up in the filename.
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check the success:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of e.g. <i>apache2.service</i>, you can place a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot ID.
What is that? An ID which is generated at each boot.
You can get the boot ID with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the current one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
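Alternatively the kernel exposes the same ID under /proc (assumption: there it contains dashes, while the systemd private-tmp paths use it without them), so you can derive the cleanup path like this:

```shell
# Read the boot ID from /proc and strip the dashes, as used in the
# /tmp/systemd-private-<bootid>-<unit>-* directory names.
boot_id=$(tr -d '-' < /proc/sys/kernel/random/boot_id)
printf '/tmp/systemd-private-%s-apache2.service-*/tmp\n' "$boot_id"
```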
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run it once an hour? Then just reschedule the timer:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<source lang=ini>
# /etc/systemd/system/tomcat-example.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</source>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<source>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</source>
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not interpret shell redirections or & in Exec lines
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
fdb75177afbb9be7fd1b973bd2c3b718c891aa82
Ubuntu zsys
0
377
2104
2076
2021-04-26T08:25:27Z
Lollypop
2
/* Cconfigure garbage collection */
wikitext
text/x-wiki
==Configure garbage collection==
<source>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entries per unit of time if enough of them are present
# The order determines the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC starts after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 7
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsys-gc.service
</source>
54872c73b02c7c64a66292c1428e60dc664c729a
2124
2104
2021-06-22T08:48:28Z
Lollypop
2
/* Cconfigure garbage collection */
wikitext
text/x-wiki
[[category:Ubuntu]]
==Configure garbage collection==
<source lang=yaml>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entries per unit of time if enough of them are present
# The order determines the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC starts after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 7
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsysd.service
zsysctl service gc
update-grub
</source>
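As a sanity check, the minimum number of states these rules retain can be worked out with shell arithmetic: keeplast plus buckets times samplesperbucket for each rule.
<source lang=bash>
keeplast=7
prevday=$((1 * 3))     # PreviousDay:   1 bucket  x 3 samples
prevweek=$((5 * 1))    # PreviousWeek:  5 buckets x 1 sample
prevmonth=$((4 * 1))   # PreviousMonth: 4 buckets x 1 sample
total=$((keeplast + prevday + prevweek + prevmonth))
echo "$total"   # 19 states minimum
</source>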
45a002e30e4651ce266edb82d317d0d7bfcb4ab0
SuSE Manager
0
348
2108
1899
2021-05-19T09:06:03Z
Lollypop
2
/* Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap */
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in new cloned channels with a timestamp appended, e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
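The -x argument is just a sed expression applied to each channel name, so you can preview the resulting name without touching SUSE Manager at all:
<source lang=bash>
STAMP=$(date '+%Y-%m-%d_%H:%M:%S')
# s/$/.../ appends the timestamp to the end of the channel name
CLONE=$(echo 'sles12-sp3-pool-x86_64' | sed "s/\$/-${STAMP}/")
echo "$CLONE"
</source>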
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]:
<source lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer zypper versions call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now get any repository bound to your SuSE installation, like the ISO this version was installed from:
<source lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
or enable it:
<source lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</source>
Reinstall zypper in the old version that does not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</source>
And disable the ISO repository:
<source lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</source>
Done.
=====Note: After some further debugging we found that the system library path pulled in a wrong OpenSSL library.=====
<source lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</source>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which put /usr/lib/nsr/lib64 into the system library path, where another libssl.so.1.0.0 (version 1.0.2h) was found first. OK, what to do?
<source lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</source>
Check the success:
<source lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</source>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<source lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</source>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
e1af033043730d96a05b56d7ba06ec01f0aeb755
Cachefilesd
0
382
2110
2021-06-08T13:08:00Z
Lollypop
2
Created page with "=Cachefilesd= ==Check if kernel supports FSCACHE for your filesystem type== <source lang=bash> # grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r` CONFIG_NFS_FSCACHE=y CONFIG..."
wikitext
text/x-wiki
=Cachefilesd=
==Check if kernel supports FSCACHE for your filesystem type==
<source lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</source>
== Setup /etc/cachefilesd.conf ==
<source>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
#dir /media/ramdisk
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</source>
== Problems with autofs mounted filesystems ==
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<source lang=bash>
# update-rc.d cachefilesd disable
</source>
==== Make cachefilesd started by systemd ====
<source lang=bash>
# systemctl edit --force --full cachefilesd.service
</source>
<source lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</source>
==== Enable starting new service ====
<source lang=bash>
# systemctl enable cachefilesd.service
</source>
==== Verify autofs depends on cachefilesd.service ====
<source lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</source>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<source lang=bash>
# apt install cachefilesd autofs cifs-utils
</source>
=== Create the credentials file ===
<source lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</source>
=== Create basedir of your cifs mounts ===
<source lang=bash>
# mkdir --mode=0755 /data/cifs
</source>
===/etc/auto.master===
<source>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</source>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<source>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</source>
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</source>
but after a few requests:
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</source>
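If you only care about one or two counters, you can filter the stats with awk. Sample input is inlined here so the sketch is self-contained; on a live system read /proc/fs/fscache/stats directly instead:
<source lang=bash>
STATS='Pages  : mrk=3138215 unc=181438
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0'
# extract the number of pages marked as cached (the mrk= value)
MARKED=$(printf '%s\n' "$STATS" | awk -F'mrk=' '/mrk=/ { split($2, a, " "); print a[1] }')
echo "$MARKED"
</source>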
28e6407d52a64fb5907f6bdc072872ac2d79f9ba
2111
2110
2021-06-08T13:08:54Z
Lollypop
2
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
==Check if kernel supports FSCACHE for your filesystem type==
<source lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</source>
== Setup /etc/cachefilesd.conf ==
<source>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
#dir /media/ramdisk
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</source>
== Problems with autofs mounted filesystems ==
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<source lang=bash>
# update-rc.d cachefilesd disable
</source>
==== Make cachefilesd started by systemd ====
<source lang=bash>
# systemctl edit --force --full cachefilesd.service
</source>
<source lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</source>
==== Enable starting new service ====
<source lang=bash>
# systemctl enable cachefilesd.service
</source>
==== Verify autofs depends on cachefilesd.service ====
<source lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</source>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<source lang=bash>
# apt install cachefilesd autofs cifs-utils
</source>
=== Create the credentials file ===
<source lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</source>
=== Create basedir of your cifs mounts ===
<source lang=bash>
# mkdir --mode=0755 /data/cifs
</source>
===/etc/auto.master===
<source>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</source>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<source>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</source>
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</source>
but after a few requests:
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</source>
612d9531dc7dcb299bd6b3e8f419065a62ec97d1
2112
2111
2021-06-08T13:09:45Z
Lollypop
2
/* Check if kernel supports FSCACHE for your filesystem type */
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
==Check if kernel supports filesystem cache for your filesystem type==
<source lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</source>
== Setup /etc/cachefilesd.conf ==
<source>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
#dir /media/ramdisk
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</source>
== Problems with autofs mounted filesystems ==
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<source lang=bash>
# update-rc.d cachefilesd disable
</source>
==== Make cachefilesd started by systemd ====
<source lang=bash>
# systemctl edit --force --full cachefilesd.service
</source>
<source lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</source>
==== Enable starting new service ====
<source lang=bash>
# systemctl enable cachefilesd.service
</source>
==== Verify autofs depends on cachefilesd.service ====
<source lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</source>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<source lang=bash>
# apt install cachefilesd autofs cifs-utils
</source>
=== Create the credentials file ===
<source lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</source>
=== Create basedir of your cifs mounts ===
<source lang=bash>
# mkdir --mode=0755 /data/cifs
</source>
===/etc/auto.master===
<source>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</source>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<source>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</source>
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</source>
but after a few requests:
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</source>
a488463d2f9f57e6e2bcf818dbbbe8c1ae3a672c
2113
2112
2021-06-08T13:11:08Z
Lollypop
2
/* Setup /etc/cachefilesd.conf */
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
2117
2115
2021-06-10T12:34:20Z
Lollypop
2
/* Create ramdisk for cache if enough ram */
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
==Create ramdisk for cache if enough ram==
A directory named /cache is created and the ramdisk is mounted there.
<source lang=bash>
# systemctl --force --full edit create-ramdisk@.service
</source>
<source lang=ini>
[Unit]
Description=create cache dir in ramdisk
After=remote-fs.target
Before=cachefilesd.service
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
ExecStartPre=/sbin/modprobe brd rd_nr=1 rd_size=%i
ExecStartPre=/sbin/sgdisk -Z --new 1:0:0 /dev/ram0
ExecStartPre=/sbin/mkfs.ext4 -m 0 /dev/ram0p1
ExecStartPre=-/bin/mkdir /cache
ExecStart=/bin/mount -o user_xattr /dev/ram0p1 /cache
ExecStop=/bin/umount /cache
ExecStop=/sbin/rmmod brd
[Install]
WantedBy=multi-user.target
</source>
Create, for example, a 2 GiB ramdisk with:
<source lang=bash>
# systemctl start create-ramdisk@$[ 2 * 1024 * 1024 ].service
</source>
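The template instance value is handed to the brd module's rd_size parameter, which is interpreted in KiB; a quick sanity check of the arithmetic:
<source lang=bash>
# rd_size is given in KiB, so the instance value 2*1024*1024 means 2 GiB
echo $(( 2 * 1024 * 1024 ))                       # KiB handed to brd
echo "$(( 2 * 1024 * 1024 / 1024 / 1024 )) GiB"   # same value back in GiB
</source>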
Destroy it again:
<source lang=bash>
# systemctl stop create-ramdisk@$[ 2 * 1024 * 1024 ].service
</source>
Create a 4 GiB one instead:
<source lang=bash>
# systemctl start create-ramdisk@$[ 4 * 1024 * 1024 ].service
</source>
Once you have found the right size, make it permanent across reboots with:
<source lang=bash>
# systemctl enable create-ramdisk@$[ ${your_gigabyte_value} * 1024 * 1024 ].service
</source>
==Check whether the kernel supports FS-Cache for your filesystem type==
<source lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</source>
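If there is no config file under /boot, the same check can be wrapped in a small helper and pointed at whatever copy of the kernel config is available (e.g. an unpacked /proc/config.gz). A sketch, demonstrated on a hypothetical sample file:
<source lang=bash>
# has_fscache CONFIG_FILE FSTYPE: succeeds if FS-Cache support for the type is built
has_fscache() {
    grep -q "^CONFIG_${2}_FSCACHE=[ym]" "$1"
}

# demonstration against a sample config (stand-in for /boot/config-$(uname -r))
printf 'CONFIG_NFS_FSCACHE=y\nCONFIG_CIFS_FSCACHE=y\n' > /tmp/config.sample
has_fscache /tmp/config.sample CIFS && echo "CIFS: FS-Cache available"
</source>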
== Set up /etc/cachefilesd.conf ==
<source>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
# obviously this should be a path to a ramdisk if you have enough ram
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</source>
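The b*/f* pairs are percentages of free blocks/files on the cache filesystem: culling of old cache entries starts when free space falls below bcull, new cache entries are refused below bstop, and culling switches off again once free space rises above brun. A sketch of that state machine with the values above (the free_pct value is made up):
<source lang=bash>
free_pct=5   # hypothetical free space on the cache filesystem, in percent

if   [ "$free_pct" -lt 3 ];  then state="bstop: cache refuses new entries"
elif [ "$free_pct" -lt 7 ];  then state="bcull: culling of old entries begins"
elif [ "$free_pct" -lt 10 ]; then state="between bcull and brun: culling keeps running if already started"
else                              state="brun: culling is switched off"
fi
echo "$state"
</source>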
== Problems with autofs mounted filesystems ==
When automount is used together with caching, cachefilesd must be running <b>before</b> autofs comes up and might mount the filesystems.
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<source lang=bash>
# update-rc.d cachefilesd disable
</source>
==== Make cachefilesd started by systemd ====
<source lang=bash>
# systemctl edit --force --full cachefilesd.service
</source>
<source lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</source>
==== Enable the new service ====
<source lang=bash>
# systemctl enable cachefilesd.service
</source>
==== Verify that autofs depends on cachefilesd.service ====
<source lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</source>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<source lang=bash>
# apt install cachefilesd autofs cifs-utils
</source>
=== Create the credentials file ===
<source lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</source>
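Since the file holds a plaintext password, it is worth asserting the mode after edits. A small check, demonstrated on a temporary file so it runs anywhere (point it at /etc/cifs_cred/credentials in practice):
<source lang=bash>
# warn when a credentials file is group- or world-readable
check_cred_perms() {
    mode=$(stat -c '%a' "$1")
    case "$mode" in
        600|400) echo "ok ($mode)" ;;
        *)       echo "too open ($mode)" ;;
    esac
}

tmp=$(mktemp)
chmod 0600 "$tmp"
check_cred_perms "$tmp"
rm -f "$tmp"
</source>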
=== Create basedir of your cifs mounts ===
<source lang=bash>
# mkdir --mode=0755 /data/cifs
</source>
===/etc/auto.master===
<source>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</source>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<source>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</source>
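Once the share has been accessed, fsc should show up in the mount options in /proc/mounts. The check below runs against a sample line so it is self-contained; on a live system, feed it <code>grep cifs /proc/mounts</code> instead:
<source lang=bash>
# sample /proc/mounts entry as it would appear for the share defined above
line='//cifsserver.my.dom/the_cifs_share /data/cifs/myshare cifs rw,vers=3.0,fsc 0 0'

opts=$(echo "$line" | awk '{print $4}')
case ",$opts," in
    *,fsc,*) echo "caching enabled" ;;
    *)       echo "fsc option missing" ;;
esac
</source>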
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</source>
After a few requests the counters come alive:
<source lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</source>
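The counter lines are easy to post-process. The snippet below splits the key=value pairs of the first Stores line into one counter per line; it reads a saved sample so it works offline — on a live system, point the awk at /proc/fs/fscache/stats directly:
<source lang=bash>
cat > /tmp/fscache.stats <<'EOF'
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
EOF

# split "key=value" pairs of the Stores counter line
awk '$1 == "Stores" && $3 ~ /^n=/ {
    for (i = 3; i <= NF; i++) { split($i, kv, "="); print kv[1], kv[2] }
}' /tmp/fscache.stats
</source>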
d19c9a65c07d14d591911c41772b363e8fb791e8
Nice Options
0
253
2116
1902
2021-06-10T08:18:27Z
Lollypop
2
wikitext
text/x-wiki
Linux:
<source lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
ss -open4all
journalctl -efeu
grep -Hirn
pwgen -nancy 17
</source>
Solaris:
<source lang=bash>
prstat -Lmaa
iostat -Erni
</source>
5f617558ec74823eb0a64dd0f8cb24f4c646800b
Galera Cluster
0
383
2118
2021-06-14T12:29:37Z
Lollypop
2
Created page with "Not a real page now... === Show wsrep_provider_options === <source lang=bash> $ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub..."
wikitext
text/x-wiki
Not a real page yet...
=== Show wsrep_provider_options ===
<source lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</source>
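For a quick look, splitting on ';' alone is often enough. Demonstrated on a shortened, made-up sample of the option string (the real values come from the mariadb query above):
<source lang=bash>
# sample wsrep_provider_options excerpt (values are made up)
opts='evs.auto_evict = 0; gcache.size = 128.0M; gcs.fc_limit = 16'

echo "$opts" | tr ';' '\n'
</source>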
887dd1e9a5498de54644c1f9e6448e36a3579827
OpenSSL
0
347
2119
1824
2021-06-14T12:30:13Z
Lollypop
2
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<source lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</source>
<source lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout
</source>
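When verification fails, it often helps to look at the subject and validity dates of the certificate in question. Shown here on a throwaway self-signed certificate so the commands are reproducible; substitute the certificate files from above:
<source lang=bash>
# create a short-lived self-signed certificate to inspect
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example.test' \
    -keyout /tmp/test.key -out /tmp/test.pem -days 1 2>/dev/null

# print subject and validity period
openssl x509 -in /tmp/test.pem -noout -subject -dates
</source>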
9176724bdb2edbd36e2a927659719faa28662297
Sendmail
0
384
2120
2021-06-22T07:16:32Z
Lollypop
2
Created page with "=Compile sendmail= ==Solaris 10== Untar source, then go into the source directory. ===devtools/Site/site.config.m4=== <source lang=m4> dnl ###################################..."
wikitext
text/x-wiki
=Compile sendmail=
==Solaris 10==
Untar the source, then change into the source directory.
===devtools/Site/site.config.m4===
<source lang=m4>
dnl #####################################################################
dnl ### Changes to disable the default NIS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF', `-UNIS')
dnl #####################################################################
dnl ### Changes for PH_MAP support. ###
dnl #####################################################################
APPENDDEF(`confMAPDEF',`-DPH_MAP')
APPENDDEF(`confLIBS', `-lphclient')
APPENDDEF(`confINCDIRS', `-I/opt/nph/include')
APPENDDEF(`confLIBDIRS', `-L/opt/nph/lib')
dnl #####################################################################
dnl ### Changes for STARTTLS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF',`-DSTARTTLS')
APPENDDEF(`confLIBS', `-lssl -lcrypto')
APPENDDEF(`confLIBDIRS', `-L/opt/openssl/lib -R/opt/openssl/lib')
APPENDDEF(`confINCDIRS', `-I/opt/openssl/include')
dnl #####################################################################
dnl ### GCC settings ###
dnl #####################################################################
define(`confCC', `gcc')
define(`confOPTIMIZE', `-O3')
define(`confCCOPTS', `-m64 -B/usr/ccs/bin/amd64')
define(`confLDOPTS', `-m64 -static-libgcc -lgcc_s_amd64')
APPENDDEF(`confENVDEF', `-DSM_CONF_STDBOOL_H=0')
APPENDDEF(`confLIBDIRS', `-L/lib/64 -R/lib/64 -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64')
dnl #####################################################################
dnl ### Use the more modern shell ###
dnl #####################################################################
define(`confSHELL', `/usr/bin/bash')
dnl #####################################################################
dnl ### Installdirs ###
dnl #####################################################################
define(`confMANROOT', `/opt/sendmail-8.16.1/share/man/cat')
define(`confMANROOTMAN', `/opt/sendmail-8.16.1/share/man/man')
define(`confMBINDIR', `/opt/sendmail-8.16.1/sbin')
define(`confUBINDIR', `/opt/sendmail-8.16.1/bin')
</source>
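For reference: <code>define</code> in site.config.m4 replaces a build variable outright, while <code>APPENDDEF</code> appends to a list-valued one, which is why the feature blocks above use APPENDDEF. A further hypothetical addition (SMTP AUTH via Cyrus SASL, with made-up install paths) would follow the same pattern:
<source lang=m4>
dnl #####################################################################
dnl ### Changes for SMTP AUTH (SASL) support -- hypothetical paths   ###
dnl #####################################################################
APPENDDEF(`confENVDEF', `-DSASL=2')
APPENDDEF(`confLIBS', `-lsasl2')
APPENDDEF(`confINCDIRS', `-I/opt/sasl/include')
APPENDDEF(`confLIBDIRS', `-L/opt/sasl/lib')
</source>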
<source lang=bash>
# sh ./Build -c
# cd cf/cf
# cp generic-solaris.mc sendmail.mc
# sh ./Build sendmail.cf
# sh ./Build install-cf
# mkdir -p /opt/sendmail-8.16.1/{bin,share/man/cat{1,5,8}} ; ./Build install ;
</source>
8660169c1ece9417aeeed4164e97090448e739cd
Ubuntu remove desktop
0
385
2123
2122
2021-06-22T08:44:40Z
Lollypop
2
/* Ubuntu 20.04 */
wikitext
text/x-wiki
[[category:Ubuntu|desktop]]
=Ubuntu 20.04=
<source lang=bash>
# GRUB: Remove splash and quiet from GRUB_CMDLINE_LINUX_DEFAULT
sudo perl -pi -e 's#^(GRUB_CMDLINE_LINUX_DEFAULT=".*)(quiet)(.*")$#\1\3#g,s#^(GRUB_CMDLINE_LINUX_DEFAULT=".*)(splash)(.*")$#\1\3#g' /etc/default/grub
# GRUB: Add or change to GRUB_DISABLE_OS_PROBER=true
sudo perl -ni -e '$c=1 if s/^GRUB_DISABLE_OS_PROBER=.*$/GRUB_DISABLE_OS_PROBER=true/; print; if(eof){print "GRUB_DISABLE_OS_PROBER=true\n" unless $c==1};' /etc/default/grub
# Remove desktop packages
sudo apt --yes purge adwaita-icon-theme gedit-common gir1.2-gdm-1.0 \
gir1.2-gnomebluetooth-1.0 gir1.2-gnomedesktop-3.0 gir1.2-goa-1.0 \
gnome-accessibility-themes gnome-bluetooth gnome-calculator gnome-calendar \
gnome-characters gnome-control-center gnome-control-center-data \
gnome-control-center-faces gnome-desktop3-data \
gnome-font-viewer gnome-getting-started-docs gnome-getting-started-docs-ru \
gnome-initial-setup gnome-keyring gnome-keyring-pkcs11 gnome-logs \
gnome-mahjongg gnome-menus gnome-mines gnome-online-accounts \
gnome-power-manager gnome-screenshot gnome-session-bin gnome-session-canberra \
gnome-session-common gnome-settings-daemon gnome-settings-daemon-common \
gnome-shell gnome-shell-common gnome-shell-extension-appindicator \
gnome-shell-extension-desktop-icons gnome-shell-extension-ubuntu-dock \
gnome-startup-applications gnome-sudoku gnome-system-monitor gnome-terminal \
gnome-terminal-data gnome-themes-extra gnome-themes-extra-data gnome-todo \
gnome-todo-common gnome-user-docs gnome-user-docs-ru gnome-video-effects \
language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-ru \
language-pack-gnome-ru-base language-selector-gnome libgail18 \
libgail-common libgnome-autoar-0-0 libgnome-bluetooth13 \
libgnome-desktop-3-19 libgnome-games-support-1-3 libgnome-games-support-common \
libgnomekbd8 libgnomekbd-common libgnome-menu-3-0 libgnome-todo libgoa-1.0-0b \
libgoa-1.0-common libpam-gnome-keyring libsoup-gnome2.4-1 \
nautilus-extension-gnome-terminal pinentry-gnome3 yaru-theme-gnome-shell \
yaru-theme-icon yaru-theme-sound ubuntu-wallpapers ubuntu-wallpapers-focal \
x11-common x11-apps xcursor-themes xbitmaps xfonts-base xfonts-encodings
# Purge packages that are no longer referenced
sudo apt --yes autopurge
# Fix plymouth problems
sudo apt --yes install plymouth-theme-spinner
# Ensure the boot environment creation works
sudo update-initramfs -k $(uname -r) -u
sudo update-grub
</source>
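The add-or-change one-liner above can be tried safely on a scratch file before touching /etc/default/grub. This sketch (the scratch file contents are made up for illustration) shows that an existing GRUB_DISABLE_OS_PROBER line is rewritten in place rather than appended a second time:
<source lang=bash>
# Demonstrate the add-or-change behaviour on a scratch file instead of
# /etc/default/grub (contents here are made up for illustration)
tmp=$(mktemp)
printf 'GRUB_TIMEOUT=5\nGRUB_DISABLE_OS_PROBER=false\n' > "$tmp"
perl -ni -e '$c=1 if s/^GRUB_DISABLE_OS_PROBER=.*$/GRUB_DISABLE_OS_PROBER=true/; print; if(eof){print "GRUB_DISABLE_OS_PROBER=true\n" unless $c==1};' "$tmp"
count=$(grep -c '^GRUB_DISABLE_OS_PROBER=true$' "$tmp")
echo "$count"
rm -f "$tmp"
</source>
If the line is missing entirely, the eof branch appends it instead, so the file ends up with exactly one GRUB_DISABLE_OS_PROBER=true either way.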
9b1aa4c597100deb37aa2a814ec6dea4b82e7ce5
Ubuntu networking
0
278
2125
1971
2021-06-23T16:29:37Z
Lollypop
2
/* The ip command */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==The ip command==
===Configure bond manually===
<source lang=bash>
# ip link add bond0 type bond
# ip link set bond0 type bond miimon 100 mode active-backup
# ip link set eno5 down
# ip link set eno5 master bond0
# ip link set eno6 down
# ip link set eno6 master bond0
# ip link set bond0 up
# ip addr add <my-ip> dev bond0
# ip route add default via <gateway-ip>
# systemd-resolve --interface=bond0 --set-dns=<dns-ip>
</source>
===ipa===
This is not only India Pale Ale! On Linux,
<source lang=bash>
# ip a
</source>
shows you the configured addresses.
It is the shortcut for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<source lang=bash>
# ip li sh up
</source>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The configuration formerly kept in /etc/network/interfaces{,.d} is now found in /etc/netplan in YAML syntax.
The name of the file is /etc/netplan/<whatever you want, I prefer the interface name>.yaml; the .yaml at the end is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.60"
- "192.168.3.61"
</source>
e1bfcd7bb5f4711e4bb2071d284f8dcd5c2185ea
2126
2125
2021-06-24T08:20:53Z
Lollypop
2
/* Configure bond manually */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==The ip command==
===Configure bond manually===
Specify your environment
<source lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</source>
Create the bonding interface out of the two masters
<source lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</source>
If you want to add a VLAN to your interface
<source lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</source>
Bring your interface up and set your IP address
<source lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</source>
Set your default gateway and DNS
<source lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</source>
===ipa===
This is not only India Pale Ale! On Linux,
<source lang=bash>
# ip a
</source>
shows you the configured addresses.
It is the shortcut for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<source lang=bash>
# ip li sh up
</source>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The configuration formerly kept in /etc/network/interfaces{,.d} is now found in /etc/netplan in YAML syntax.
The name of the file is /etc/netplan/<whatever you want, I prefer the interface name>.yaml; the .yaml at the end is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.60"
- "192.168.3.61"
</source>
95050072e6d34c9f25ff9bab607bb4128be08475
2127
2126
2021-06-24T08:21:29Z
Lollypop
2
/* Configure bond manually */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==The ip command==
===Configure bond manually===
Specify your environment
<source lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</source>
Create the bonding interface out of the two masters
<source lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</source>
If you want to add a VLAN to your interface
<source lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</source>
Bring your interface up and set your IP address
<source lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</source>
Set your default gateway and DNS
<source lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</source>
===ipa===
This is not only India Pale Ale! On Linux,
<source lang=bash>
# ip a
</source>
shows you the configured addresses.
It is the shortcut for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<source lang=bash>
# ip li sh up
</source>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The configuration formerly kept in /etc/network/interfaces{,.d} is now found in /etc/netplan in YAML syntax.
The name of the file is /etc/netplan/<whatever you want, I prefer the interface name>.yaml; the .yaml at the end is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.60"
- "192.168.3.61"
</source>
d3e4acb6f53c33f60b04f7500768a762f163bf2f
2128
2127
2021-06-24T08:22:54Z
Lollypop
2
/* ipa */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==The ip command==
===Configure bond manually===
Specify your environment
<source lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</source>
Create the bonding interface out of the two masters
<source lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</source>
If you want to add a VLAN to your interface
<source lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</source>
Bring your interface up and set your IP address
<source lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</source>
Set your default gateway and DNS
<source lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</source>
===ipa===
This is not only India Pale Ale! On Linux,
<source lang=bash>
# ip a
</source>
shows you the configured addresses.
It is the shortcut for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<source lang=bash>
# ip li sh up
</source>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The configuration formerly kept in /etc/network/interfaces{,.d} is now found in /etc/netplan in YAML syntax.
The name of the file is /etc/netplan/<whatever you want, I prefer the interface name>.yaml; the .yaml at the end is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.60"
- "192.168.3.61"
</source>
569a5bcb23ea3219df0415d517498d4deb92cb16
2129
2128
2021-06-24T08:23:21Z
Lollypop
2
/* Configure bond manually */
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<source lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<source lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</source>
===Check settings===
<source lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</source>
==The ip command==
===Configure bond manually===
Specify your environment
<source lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9/24
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</source>
Create the bonding interface out of the two masters
<source lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</source>
If you want to add a VLAN to your interface
<source lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</source>
Bring your interface up and set your IP address
<source lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</source>
Set your default gateway and DNS
<source lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</source>
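The systemd-resolve one-liner above relies on a small shell trick to join the nameserver array with commas; this sketch shows that part in isolation, using the example addresses from above:
<source lang=bash>
# Join the nameserver array with commas -- this is the $(IFS=,; ...) part
# of the one-liner above, shown on its own
declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
joined=$(IFS=,; printf '%s' "${mynameservers[*]}")
echo "${joined}"
</source>
Setting IFS inside the command substitution keeps the change local, so the rest of the shell session is unaffected.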
===ipa===
This is not only India Pale Ale! On Linux,
<source lang=bash>
# ip a
</source>
shows you the configured addresses.
It is the shortcut for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<source lang=bash>
# ip li sh up
</source>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The configuration formerly kept in /etc/network/interfaces{,.d} is now found in /etc/netplan in YAML syntax.
The name of the file is /etc/netplan/<whatever you want, I prefer the interface name>.yaml; the .yaml at the end is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<source lang=bash>
# netplan apply
</source>
Keep in mind: You might lose your connection depending on the changes made!
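If you are working over SSH, netplan also has a try mode that rolls the change back automatically unless you confirm it within a timeout (the 30 seconds here is just an example value):
<source lang=bash>
# netplan try --timeout 30
</source>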
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<source lang=yaml>
network:
ethernets:
ens160:
dhcp4: yes
version: 2
</source>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
slave1:
match:
macaddress: "3c:a7:2a:22:af:70"
dhcp4: no
slave2:
match:
macaddress: "3c:a7:2a:22:af:71"
dhcp4: no
bonds:
bond007:
interfaces:
- slave1
- slave2
parameters:
mode: balance-rr
mii-monitor-interval: 10
dhcp4: no
addresses:
- 192.168.189.202/27
gateway4: 192.168.189.193
nameservers:
search:
- mcs.de
addresses:
- "192.168.3.60"
- "192.168.3.61"
</source>
487484b5dbdc7d76e466bae77f26cda3339d29c6
Autofs
0
256
2132
1324
2021-06-30T09:46:07Z
Lollypop
2
/* /etc/auto.master.d/home.map */
wikitext
text/x-wiki
[[Kategorie:Linux|autofs]]
[[Kategorie:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<source lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</source>
===/etc/auto.master.d/home.autofs===
<source lang=bash>
/home /etc/auto.master.d/home.map
</source>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<source lang=bash>
* :/data/home/& nfs.server.de:/home/&
</source>
or from a server that supports NFSv4.1:
<source lang=bash>
* -proto=tcp,vers=4.1 nfs.server.de:/home/&
</source>
The asterisk means that any directory requested under /home/ is matched by this rule.
The ampersand is replaced by the part that the asterisk matched.
So if you enter /home/a, the automounter first searches locally for /data/home/a, which is mounted when found.
<source lang=bash>
# cd /home/a
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</source>
For another /home/b which is on the nfs server it looks like this:
<source lang=bash>
# cd /home/b
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</source>
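The key substitution can be illustrated with plain shell string replacement; this only mimics what automount does with the requested key, it is not autofs itself:
<source lang=bash>
# Mimic the map lookup for key "a": every ampersand in the entry is
# replaced by the requested key (illustration only -- autofs does this
# internally)
key=a
entry=':/data/home/& nfs.server.de:/home/&'
resolved="${entry//&/${key}}"
echo "${resolved}"
</source>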
===cifs===
<i>/etc/auto.master.d/mycifsshare.autofs</i>:
<source lang=bash>
/data/cifs /etc/auto.master.d/mycifsshare.map
</source>
<i>/etc/auto.master.d/mycifsshare.map</i>:
<source lang=bash>
mycifsshare -fstype=cifs,rw,credentials=/etc/samba/mycifsshare_credentials,uid=<myuser>,forceuid ://192.168.1.2/mycifsshare
</source>
341b2f0ef11d5f31025dd9cc262977c149fbac2a
NFS
0
386
2133
2021-06-30T10:07:16Z
Lollypop
2
Created page with "Some things to know about NFS... =NFSv4.1= ==Server== ===Configure rpc.idmapd=== * /etc/idmapd.conf You should better set a Domain. Set the same Domain on server an client(s)..."
wikitext
text/x-wiki
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
96ffce93a5cbc76542d2de0bbe2ff95f16ee39da
NFS
0
386
2134
2133
2021-06-30T10:07:52Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
c958940221bbfe5b4b270601a9ce23e2ed406c69
2135
2134
2021-06-30T10:08:59Z
Lollypop
2
/* Configure rpc.idmapd */
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
c40c24e670adca365d74783cf6fe81772fb7f500
2136
2135
2021-06-30T10:34:51Z
Lollypop
2
/* NFSv4.1 */
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Bind statd to specific port===
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
606ad4cf30258ece2fdd6afac2432034f3514cca
2137
2136
2021-06-30T12:35:43Z
Lollypop
2
/* Bind rpc.mountd to specific port */
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
</source>
===Bind statd to specific port===
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
e13a091ba07e9bef5e89257938ce4ce90f9f1b3d
2138
2137
2021-06-30T12:43:43Z
Lollypop
2
/* Bind rpc.mountd to specific port */
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Bind statd to specific port===
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
fed51ccb0d91ff227387644e1499b3474864c65e
2139
2138
2021-06-30T12:57:57Z
Lollypop
2
/* NFSv4.1 */
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should definitely set a Domain. Set the same Domain on server and client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Bind statd to specific port===
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting by running:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Disable at least NFSv2===
* /etc/default/nfs-kernel-server
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335 --no-nfs-version 2"
RPCNFSDOPTS="--no-nfs-version 2"
</source>
===Disable all but NFSv4 and higher===
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
NEED_STATD="no"
NEED_IDMAPD="yes"
RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 3"
</source>
===Configure ufw===
Caution! The port range here must include the mountd port you set above! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
a96e5d18f61cdb93109e112823fa572f89f60875
2140
2139
2021-06-30T13:08:55Z
Lollypop
2
/* Bind statd to specific port */
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
It is best to set a Domain explicitly. Use the same Domain on the server and the client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Bind statd to specific port===
This is only needed if you still use protocols below NFSv4.
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting by running:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Disable at least NFSv2===
* /etc/default/nfs-kernel-server
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335 --no-nfs-version 2"
RPCNFSDOPTS="--no-nfs-version 2"
</source>
===Disable all but NFSv4 and higher===
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
NEED_STATD="no"
NEED_IDMAPD="yes"
RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 3"
</source>
===Configure ufw===
Caution! The port range here must include the mountd port you set above! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
4d40e2c42722f29877c835f9ae1b657874d8531c
2141
2140
2021-06-30T13:14:47Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv3=
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
===Bind statd to specific port===
This is only needed if you still use protocols below NFSv4.
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting by running:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Configure ufw===
Caution! The port range here must include the mountd port you set above! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
It is best to set a Domain explicitly. Use the same Domain on the server and the client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
===Disable at least NFSv2===
* /etc/default/nfs-kernel-server
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335 --no-nfs-version 2"
RPCNFSDOPTS="--no-nfs-version 2"
</source>
===Disable all but NFSv4 and higher===
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
NEED_STATD="no"
NEED_IDMAPD="yes"
RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 3"
</source>
===Configure ufw===
For plain NFSv4 and higher you only need this:
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any port 2049/tcp
</source>
If you still need NFSv3, see the NFSv3 section above.
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
625e21a0943d5acd6f5f8ecf104030aaec6f57ce
2142
2141
2021-06-30T13:15:22Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv3=
==Server==
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed port is much better.
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</source>
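After changing the port, restart the NFS server and check that mountd registered on the fixed port. This is a sketch; the service name <i>nfs-kernel-server</i> is the Debian/Ubuntu one and may differ on other distributions.
<source lang=bash>
# systemctl restart nfs-kernel-server
# rpcinfo -p | grep mountd
</source>
rpcinfo should now list mountd on port 33333 for all advertised protocol versions.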
===Bind statd to specific port===
This is only needed if you still use protocols below NFSv4.
* /etc/default/nfs-common
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</source>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<source lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
Activate it without rebooting by running:
<source lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</source>
===Configure ufw===
Caution! The port range here must include the mountd port you set above! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<source lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</source>
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</source>
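You can check that ufw picked up the application profile and the resulting rule (assuming the profile file above is in place):
<source lang=bash>
# ufw app info NFS-Server
# ufw status
</source>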
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
It is best to set a Domain explicitly. Use the same Domain on the server and the client(s)!
<source lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</source>
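After editing idmapd.conf, restart the idmap service so the new Domain takes effect. A sketch; the service name <i>nfs-idmapd</i> applies to systemd-based distributions and may differ elsewhere.
<source lang=bash>
# systemctl restart nfs-idmapd
# nfsidmap -c    # on a client: clear cached id mappings
</source>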
===Disable at least NFSv2===
* /etc/default/nfs-kernel-server
<source lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335 --no-nfs-version 2"
RPCNFSDOPTS="--no-nfs-version 2"
</source>
===Disable all but NFSv4 and higher===
* /etc/default/nfs-kernel-server
<source lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
NEED_STATD="no"
NEED_IDMAPD="yes"
RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 3"
</source>
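After restarting the server you can verify which NFS versions the kernel still offers (a leading + means enabled, - means disabled):
<source lang=bash>
# cat /proc/fs/nfsd/versions
</source>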
===Configure ufw===
For plain NFSv4 and higher you only need this:
<source lang=bash>
# ufw allow from 172.16.16.16/28 to any port 2049/tcp
</source>
If you still need NFSv3, see the NFSv3 section above.
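On the client side, a plain NFSv4 mount goes through that single port 2049. A sketch with placeholder names (<i>server.domain.tld</i> and <i>/export</i> are assumptions, not from this setup):
<source lang=bash>
# mount -t nfs4 server.domain.tld:/export /mnt
</source>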
===List clients that are connected===
<source lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</source>
==Server and Client==
8e4a3e9a36381cb4f64f084450a30635e3459569
Solaris grub
0
199
2143
1386
2021-07-12T10:07:11Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
== SP-console on x86-systems ==
=== Set speed and port in grub (Solaris 11) ===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
=== Set speed and port in grub (old) ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Make the speed known to Solaris ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
=== Set speed in BIOS ===
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
=== Set speed in SP ===
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
a8923d76685f37f366f508ce9189c47aca7f557f
2144
2143
2021-07-12T10:13:12Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= SP-console on x86-systems =
== Solaris 11 ==
=== Set speed and port in grub (Solaris 11)===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
== Set speed in BIOS ==
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
ddeeda758be5e7e2f03a5f7cb2b7a4c5b8d6d34d
2145
2144
2021-07-12T10:14:17Z
Lollypop
2
/* Set speed for SP host serial = */
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= SP-console on x86-systems =
== Solaris 11 ==
=== Set speed and port in grub (Solaris 11)===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
== Set speed in BIOS ==
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
8b13cab015e572168994dc6b620d599d13676719
2146
2145
2021-07-12T10:14:34Z
Lollypop
2
/* Set speed in BIOS = */
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= SP-console on x86-systems =
== Solaris 11 ==
=== Set speed and port in grub (Solaris 11)===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
== Set speed in BIOS ==
Enter the BIOS with CTRL+E, then: Advanced -> Serial Port Console Redirection -> Bits per second : 115200
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
a89bd5dc3b949b17a6e882f47412c3064fc1caf5
2147
2146
2021-07-12T10:16:03Z
Lollypop
2
/* Set speed in BIOS */
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= SP-console on x86-systems =
== Solaris 11 ==
=== Set speed and port in grub (Solaris 11)===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
== Set speed in BIOS ==
Enter BIOS setup with <i>F2</i> or <i>CTRL+E</i>, then go to
<pre>
Advanced -> Serial Port Console Redirection -> Bits per second : 115200
</pre>
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
af71f07d7a0d2ff208eb2487c7134f416ec5a8cc
2148
2147
2021-07-12T10:17:04Z
Lollypop
2
/* SP-console on x86-systems */
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= Set SP-console on x86-systems to 115200 Baud =
== Solaris 11 ==
=== Set speed and port in grub (Solaris 11)===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
== Set speed in BIOS ==
Enter BIOS setup with <i>F2</i> or <i>CTRL+E</i>, then go to
<pre>
Advanced -> Serial Port Console Redirection -> Bits per second : 115200
</pre>
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
2e8c191ffe9ca43096d18141f0afe19b4e01f005
2149
2148
2021-07-12T10:18:43Z
Lollypop
2
/* Set speed and port in grub (Solaris 11) */
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= Set SP-console on x86-systems to 115200 Baud =
== Solaris 11 ==
=== Set speed and port in grub ===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
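You can inspect the regenerated boot menu afterwards on Solaris 11:
<source lang=bash>
# bootadm list-menu
</source>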
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
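Verify that the ttymon label was applied to the console-login service:
<source lang=bash>
# svcprop -p ttymon/label svc:/system/console-login
</source>
This should print <i>console115200</i>.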
== Set speed in BIOS ==
Enter BIOS setup with <i>F2</i> or <i>CTRL+E</i>, then go to
<pre>
Advanced -> Serial Port Console Redirection -> Bits per second : 115200
</pre>
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
2dc1b717109816e989b7456c547e6ba26cbe22fc
2150
2149
2021-07-12T10:20:26Z
Lollypop
2
/* Set SP-console on x86-systems to 115200 Baud */
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= Set SP-console on x86-systems to 115200 Baud =
You need to set the new speed in all three places:
# grub
# SP host serial
# BIOS serial
== Solaris 11 ==
=== Set speed and port in grub ===
<source lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</source>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<source lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</source>
=== Set speed ===
/boot/solaris/bootenv.rc
<source lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</source>
Takes effect after a reboot.
=== Set console login speed ===
/etc/ttydefs
<source lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</source>
<source lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</source>
== Set speed in BIOS ==
Enter BIOS setup with <i>F2</i> or <i>CTRL+E</i>, then go to
<pre>
Advanced -> Serial Port Console Redirection -> Bits per second : 115200
</pre>
== Set speed for SP host serial ==
<source lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</source>
=grub rescue>=
The problem:
<source lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</source>
==Get into the normal grub==
Find your devices:
<source lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</source>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</source>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</source>
===Now you can load and start the module called "normal"===
<source lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</source>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<source lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</source>
767737edc907c24cbdb4af875a028a67a5b384b5
Systemd
0
233
2151
2131
2021-08-10T08:23:22Z
Lollypop
2
/* Examples */
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, like daemon names in general, this has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It has many new features: it can supervise processes after they have started, list sockets owned by processes started by systemd, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware-related and software-related units.
<source lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</source>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<source lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</source>
Ah, I deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<source lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</source>
==Display unit declaration==
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
==Sockets==
<source lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</source>
==View dependencies==
What depends on ''zfs.target'':
<source lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</source>
And what do we need to reach the ''zfs.target''?
<source lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</source>
==Get the main PID of a service==
<source lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</source>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<source lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</source>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look there to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
==Nailing a process to its privileges: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and privileges it currently has. This prohibits privilege escalation: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
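A minimal sketch of enabling this for an arbitrary service through a drop-in (the unit name ''example.service'' is an assumption, not from the original):

```ini
# Created via: systemctl edit example.service
# (lands in /etc/systemd/system/example.service.d/override.conf)
[Service]
# From here on, execve() of setuid/setgid or file-capability binaries
# can no longer raise the privileges of this process tree.
NoNewPrivileges=true
```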
==Limiting access to a socket==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
Deny all but the monitoring server (172.17.128.193):
<source lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</source>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<source lang=ini>
# systemctl edit check_mk.socket
</source>
First clear the old value, then set the new one.
<source lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</source>
=systemd-resolved the name resolve service=
==Status==
<source lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</source>
==Cache statistics==
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
==Flush the cache==
<source lang=bash>
$ systemd-resolve --flush-caches
</source>
Check with:
<source lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</source>
=systemd-timesyncd an alternative to ntp=
ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done via <i>/etc/systemd/timesyncd.conf</i>:
<source lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</source>
The NTP setting is a space-separated list of NTP servers.
FallbackNTP lists servers to use if none of the servers in NTP can be reached.
If you want to split the configuration into multiple files, or generate it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
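A hedged sketch of such a drop-in (the filename ''10-servers.conf'' is made up; the servers are the ones used above):

```ini
# /etc/systemd/timesyncd.conf.d/10-servers.conf  (hypothetical filename)
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
```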
After you have set up the config, enable timesyncd via:
<source lang=bash>
# timedatectl set-ntp true
</source>
Check the result with:
<source lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</source>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<source lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</source>
Hmm... let us take a look at ntp:
<source lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</source>
Maybe we should uninstall or disable ntp first ;-).
<source lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</source>
<source lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</source>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<source lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</source>
This means that to reach ''zfs.target'' we want ''zed.service'' to be started (if it is enabled), and we require ''zfs-mount.service'' and ''zfs-share.service''.
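As a hedged sketch of the other direction, a local unit that must not start before the ZFS datasets are available could hook into this target (the unit name ''example.service'' and the ExecStart path are made up):

```ini
# /etc/systemd/system/example.service  (hypothetical unit)
[Unit]
Description=Example service that needs mounted ZFS datasets
Requires=zfs.target
After=zfs.target

[Service]
ExecStart=/usr/local/bin/example-daemon

[Install]
WantedBy=multi-user.target
```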
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleared. This is done via a separate mount namespace for the unit.
<source lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</source>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
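A minimal sketch with two hypothetical units ''a.service'' and ''b.service'' sharing one private /tmp:

```ini
# a.service (hypothetical) owns the private /tmp and /var/tmp:
[Service]
PrivateTmp=true

# b.service (hypothetical) joins a.service's namespace and therefore
# sees the same private directories:
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
```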
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<source lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</source>
With this capability set, we can use it as a normal user:
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</source>
If we remove this capability, it no longer works:
<source lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</source>
<source lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</source>
Of course it still works as root, since root has all capabilities:
<source lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</source>
So we'd better set this capability again:
<source lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</source>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<source lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</source>
===Get the name for the needed unit file===
The name of a .mount unit file has to be derived from the mount destination path, with dashes escaped. To get the resulting name, you can simply use systemd-escape.
<source lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</source>
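As an illustration only (the helper name ''escape_path'' is made up, and the real systemd-escape handles many more characters), the two rules at work here, escaping literal dashes as \x2d and then turning path separators into dashes, can be mimicked for simple ASCII paths:

```shell
# Mimic systemd-escape -p for plain ASCII paths (illustration only):
# strip the leading slash, escape '-' as '\x2d', replace '/' with '-'.
escape_path() {
  printf '%s\n' "$1" | sed -e 's|^/||' -e 's|-|\\x2d|g' -e 's|/|-|g'
}

escape_path /var/chroot/run/systemd/journal/dev-log
# var-chroot-run-systemd-journal-dev\x2dlog
```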
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double-escape (\\) the <i>x2d</i> (which encodes a dash, -) on the shell command line.
<source lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
I want it mounted before syslog-ng and pdns-recursor come up.
Put this content into the file:
<source lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
===Mount the socket===
<source lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</source>
Check that it worked:
<source lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</source>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<source lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</source>
Restart the journal daemon:
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Read the log from the systemd-journald socket:
<source>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</source>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<source>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</source>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<source>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</source>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<source>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</source>
===Restart syslog-ng daemon===
<source lang=bash>
# systemctl restart syslog-ng.service
</source>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>, which is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of, for example, <i>apache2.service</i>, you can put a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> that are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot ID.
What is that? An ID that is generated at each boot.
You can get the boot ID with:
<source lang=bash>
# journalctl --list-boots
</source>
The second field of the last line is the current one, e.g.:
<source lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</source>
When will that be? Try:
<source lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</source>
OK, but you probably want to run it once an hour? OK, just reschedule the timer like this:
<source lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</source>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== fwupd.service behind proxy ==
<source lang=bash>
# systemctl edit fwupd-refresh.service
</source>
<source lang=ini>
[Service]
Environment=http_proxy="http://user:passw0rd@proxy.intern.net:8080" https_proxy="http://user:passw0rd@proxy.intern.net:8080"
PassEnvironment=http_proxy https_proxy
</source>
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<source lang=ini>
# /etc/systemd/system/tomcat-ndr.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</source>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<source>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</source>
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<source lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not run ExecStart/ExecStop through a shell, so
# redirections (>>) and backgrounding (&) do not work here.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</source>
<source lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</source>
54e5c44d53d8c92d27ca5bdc6298c8ca95c6e421
Fibrechannel Analyse
0
139
2152
2061
2021-09-20T11:58:11Z
Lollypop
2
wikitext
text/x-wiki
[[category:Solaris]]
[[category:Brocade]]
[[category:NetApp]]
[[category:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the available Fibre Channel ports and their status:
<source lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</source>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<source lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</source>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards are in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of known devices on a port:
<source lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</source>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously 2 switches in the fabric behind port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5: Node WWN 200400a0b85bb030
Ports 2 and 14: Node WWN 200600a0b86e103c
Port 8: Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5: Node WWN 200200a0b85aeb2d
Ports 2 and 6: Node WWN 200600a0b86e10e4
So we are attached, together with 2 storage systems, to the switch with ID 1, which in turn has a link to a switch with ID 3 where 2 more storage systems are attached.
* Node WWN
We see 4 disk devices with 2 entries each (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths via this single host port.
Hence the 4 paths (2 per host port) later in [[#mpathadm list lu]].
* Type
Disk Device: storage system
Host Bus Adapter: FC card
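The Port_ID decoding done by eye above can be sketched as a small shell helper. This is an illustration only: the function name is made up, and it assumes the standard 24-bit FC address layout <domain><area><AL_PA>, where the domain is the switch ID and the area is the switch port (note the port comes out in decimal, e.g. 11400 decodes to port 20, which is 0x14).

```shell
# Hypothetical helper: split a FC Port_ID (hex) into its three bytes.
# Domain = switch ID, area = switch port (decimal), trailing byte = AL_PA.
decode_portid() {
  local id=$((16#$1))
  printf 'Switch-ID: %d Port: %d AL_PA: 0x%02x\n' \
    $(( id >> 16 )) $(( (id >> 8) & 0xff )) $(( id & 0xff ))
}
decode_portid 30200   # Switch-ID: 3 Port: 2 AL_PA: 0x00
```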
===luxadm -e rdls <HW_path> ===
<source lang=bash>
# luxadm -e port 2>/dev/null | awk '{print $1;}' | xargs -n 1 luxadm -e rdls 2>/dev/null
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
30200 2 1 0 0 0 0
30600 2 1 0 0 0 0
10200 1 1 0 0 0 0
11400 2 1 0 0 0 0
10b00 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0,1/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
0 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
</source>
===luxadm probe===
Lists all detected Fibre Channel devices.
<source lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</source>
===luxadm display <Diskpath|WWN>===
<source lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</source>
* Vendor: SUN
The manufacturer.
* Product ID: STK6580_6780
So a StorageTek 6580/6780.
* Revision: 0784
A rough hint at the firmware (firmware version: 07.84.47.10),
see [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know.
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After that, one block per path to this device follows, consisting of:
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
The controller is mapped to its FC port via:
<source lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</source>
You can see the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host over which paths it should primarily access the LUN.
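To get a quick path overview without reading the whole block, the Class/State pairs can be tallied. A minimal sketch; a trimmed sample of the output above stands in for the live command (on a real system pipe `luxadm display <dev>` into the awk instead):

```shell
# Count Class/State combinations in (sample) `luxadm display` output.
luxadm_sample='Class                 primary
State                 ONLINE
Class                 secondary
State                 STANDBY
Class                 primary
State                 ONLINE
Class                 secondary
State                 STANDBY'
printf '%s\n' "$luxadm_sample" |
  awk '/^ *Class/{c=$2} /^ *State/{n[c"/"$2]++} END{for(k in n) print k, n[k]}' | sort
```

A healthy ALUA setup as described above shows the same number of primary/ONLINE and secondary/STANDBY paths.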
==fcinfo==
===fcinfo hba-port===
Prints some information such as manufacturer, model, firmware, port and node WWN, current speed, etc.
<source lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</source>
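The HBA port WWNs listed here are exactly what the fcinfo remote-port subcommands expect. Collecting the online ones can be sketched as follows; a trimmed sample of the output above stands in for the live command:

```shell
# Extract the port WWNs of all online HBA ports from `fcinfo hba-port` output.
fcinfo_sample='HBA Port WWN: 2100001b328a417f
        State: online
HBA Port WWN: 2101001b32aa417f
        State: offline'
online_wwns=$(printf '%s\n' "$fcinfo_sample" |
  awk '/^HBA Port WWN:/{w=$4} /State: online$/{print w}')
echo "$online_wwns"
```

On a live system the same awk can feed xargs, e.g. `fcinfo hba-port | awk '...' | xargs -n1 -I{} fcinfo remote-port --port {} --linkstat`.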
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<source lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</source>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<source lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</source>
===fcinfo lu -v <device>===
<source lang=bash>
# fcinfo lu -v /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
OS Device Name: /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
HBA Port WWN: 2100000e1ed89451
Controller: /dev/cfg/c4
Remote Port WWN: 2100f4e9d4564d21
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c97
LUN: 11
State: active/non-optimized
HBA Port WWN: 2100000e1ed89450
Controller: /dev/cfg/c3
Remote Port WWN: 2100f4e9d4564d44
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c1c
LUN: 11
State: active/non-optimized
Vendor: DataCore
Product: Virtual Disk
Device Type: Disk Device
Unformatted capacity: 204800.000 MBytes
</source>
==mpathadm==
===mpathadm list lu===
<source lang=bash>
</source>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<source lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</source>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<source lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</source>
Clean up all of them:
<source lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</source>
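The nawk part of the one-liner above keeps only "unusable" lines and strips the `,<LUN>` suffix, so each Ap_Id is unconfigured only once. Its effect can be checked against a few sample lines (plain awk stands in for Solaris nawk here):

```shell
# Reduce unusable cfgadm entries to unique controller::WWN Ap_Ids.
cfgadm_sample='c8::21000024ff2d49a2,0   disk  connected  configured  unusable
c8::21000024ff2d49a2,1   disk  connected  configured  unusable
c9::203400a0b839c421,31  disk  connected  configured  unusable'
printf '%s\n' "$cfgadm_sample" |
  awk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1); print $1}' | sort -u
```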
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this does a forced LIP!
<source lang=bash>
# cfgadm -o force_update -c configure c10
</source>
==prtconf -Da <device>==
<source lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</source>
==LUN masking (controlling access to LUNs of a storage system)==
<source lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</source>
<source lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</source>
<source lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</source>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<source lang=bash>
</source>
===lsscs list array <array_name>===
<source lang=bash>
</source>
===lsscs list -a <array_name> fcport===
<source lang=bash>
</source>
=Commands: Brocade=
==Switch commands==
===switchshow===
<source lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</source>
What does this tell us?
# This switch is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) over E-Ports to another switch (san-sw_21)
# 7 ports have SFPs fitted but are unused (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
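The port-state counting done by hand above can be automated. A sketch over the port table of a saved `switchshow`; a few sample lines from the output above stand in for the real file:

```shell
# Tally the State column (field 6) of switchshow's port table.
switchshow_sample='0  0  010000 id N8 No_Light FC
2  2  010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
16 16 011000 -- N8 No_Module FC (No POD License) Disabled'
printf '%s\n' "$switchshow_sample" |
  awk '{n[$6]++} END{for(s in n) print s, n[s]}' | sort
```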
===fabricshow===
<source lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</source>
===islshow===
<source lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</source>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Show information about NPIV-ports.
<source lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</source>
Behind this port sits a NetApp 8080 running cDOT, as you can see with <i>nodefind <address></i>:
<source lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</source>
Now look with <i>portloginshow <portnumber></i>:
<source lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</source>
With this information you can find out more about the WWNs:
<source lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</source>
Even the Vserver (NodeSymb)!
And via the node name you can find all logical interfaces of this SVM:
<source lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</source>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via a script==
===Put the backup host's ssh public key on the switches===
<source lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</source>
===Generate ssh-key on the switches===
<source lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</source>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<source lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</source>
===Now the script on the backup host===
<source lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</source>
==Script to parse a configupload file==
<source lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</source>
=Commands: NetApp=
==fcp topology show: where is my frontend SAN connected?==
<source lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</source>
==fcp config <port>: which WWN do I have?==
<source lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</source>
A nice bonus is the "host address", which shows that we are connected to switch ID 01, port 06.
==fcp wwpn-alias (set|show): alias names for more clarity when debugging==
<source lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</source>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works as follows:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</source>
Example:
<source lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</source>
=Miscellaneous=
==Find all WWNs in a file==
Prints only the WWNs, including several per line if more than one occurs in a line.
<source lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</source>
==Some additions to NetApp's sanlun lun show on Solaris==
<source lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,online[disk],paths[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</source>
a5e60cb5e89fb07bdbd7c2be491936e71c44c99c
Sendmail
0
384
2153
2120
2021-10-11T20:32:55Z
Lollypop
2
/* Solaris 10 */
wikitext
text/x-wiki
=Compile sendmail=
==Solaris 10==
Untar source, then go into the source directory.
===devtools/Site/site.config.m4===
<source lang=m4>
dnl #####################################################################
dnl ### Changes to disable the default NIS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF', `-UNIS')
dnl #####################################################################
dnl ### Changes for PH_MAP support. ###
dnl #####################################################################
APPENDDEF(`confMAPDEF',`-DPH_MAP')
APPENDDEF(`confLIBS', `-lphclient')
APPENDDEF(`confINCDIRS', `-I/opt/nph/include')
APPENDDEF(`confLIBDIRS', `-L/opt/nph/lib')
dnl #####################################################################
dnl ### Changes for STARTTLS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF',`-DSTARTTLS')
APPENDDEF(`confLIBS', `-lssl -lcrypto')
APPENDDEF(`confLIBDIRS', `-L/opt/openssl/lib -R/opt/openssl/lib')
APPENDDEF(`confINCDIRS', `-I/opt/openssl/include')
dnl #####################################################################
dnl ### GCC settings ###
dnl #####################################################################
define(`confCC', `gcc')
define(`confOPTIMIZE', `-O3')
define(`confCCOPTS', `-m64 -B/usr/ccs/bin/amd64')
define(`confLDOPTS', `-m64 -static-libgcc -lgcc_s_amd64')
APPENDDEF(`confENVDEF', `-DSM_CONF_STDBOOL_H=0')
APPENDDEF(`confLIBDIRS', `-L/lib/64 -R/lib/64 -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64')
dnl #####################################################################
dnl ### Use the more modern shell ###
dnl #####################################################################
define(`confSHELL', `/usr/bin/bash')
dnl #####################################################################
dnl ### Installdirs ###
dnl #####################################################################
define(`confMANROOT', `/opt/sendmail-8.16.1/share/man/cat')
define(`confMANROOTMAN', `/opt/sendmail-8.16.1/share/man/man')
define(`confMBINDIR', `/opt/sendmail-8.16.1/sbin')
define(`confUBINDIR', `/opt/sendmail-8.16.1/bin')
</source>
<source lang=bash>
# sh ./Build -c
# cd cf/cf
# cp generic-solaris.mc sendmail.mc
# sh ./Build sendmail.cf
# sh ./Build install-cf
# mkdir -p /opt/sendmail-8.16.1/{bin,share/man/cat{1,5,8}} ; ./Build install ;
</source>
== Using the original Solaris 10 svc to start your own sendmail ==
If you have set config/local_only=true in the parameters of svc:/network/smtp:sendmail, the service will fail with:
 Invalid operation mode l
This is because the start script ends up calling sendmail with the option "-bl" when config/local_only=true is set.
So put this in your sendmail.mc instead:
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
and set config/local_only=false:
<source lang=bash>
# svccfg -s svc:/network/smtp:sendmail setprop config/local_only=false
# svcadm refresh svc:/network/smtp:sendmail
</source>
After that sendmail might come up :-).
e6921ac0a47c6f6b984336ab33ee681ffb6d5252
Linux Tipps und Tricks
0
273
2154
1992
2021-10-15T11:29:08Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<source lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</source>
The first line enables sysrq, the second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<source lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</source>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<source lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</source>
==Rescan a device (for example after changing a VMDK size)==
<source lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</source>
This is for device sda after changing the VMDK from 20GB to 25GB:
<source lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</source>
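The GiB arithmetic above (512-byte sectors from /sys to whole GiB) can be wrapped in a tiny helper; a sketch, the function name is my own:

```shell
#!/usr/bin/env bash
# Convert a 512-byte sector count (as found in /sys/block/<dev>/size)
# to whole GiB, exactly like the $[ 512 * ... / 1024 ** 3 ] expression above.
sectors_to_gib() {
    echo $(( 512 * $1 / 1024 ** 3 ))
}

sectors_to_gib 41943040   # a 20 GiB disk
sectors_to_gib 52428800   # the same disk after growing to 25 GiB
```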
I want to put the free space into partition 1 and resize the rpool:
<source lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  sda1      ONLINE       0     0     0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</source>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
Then check that it is no longer in use:
<source lang=bash>
# mount
# pvs
# zpool status
# etc.
</source>
Then delete it:
<source lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</source>
The 32:0:0:0 is the SCSI address reported by lsscsi above.
Et voila:
<source lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</source>
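The delete path needs the SCSI address, not the device name. A sketch of looking the address up in lsscsi output (run here against a captured sample, since the live command needs real hardware; the helper name is my own):

```shell
#!/usr/bin/env bash
# Look up the SCSI address (host:channel:target:lun) for a block device
# in lsscsi output by matching the last field and stripping the brackets.
scsi_addr() {
    awk -v dev="$2" '$NF == dev { gsub(/[\[\]]/, "", $1); print $1 }' <<< "$1"
}

# Captured lsscsi output from above
sample='[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda'

scsi_addr "$sample" /dev/sdb   # prints 32:0:0:0
# The address could then be used like:
# echo 1 > /sys/bus/scsi/drivers/sd/32:0:0:0/delete
```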
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<source lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</source>
Or with:
<source lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</source>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</source>
===Resize the partition===
<source lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</source>
===Optional: Resize the ZPool in it===
Check the actual values:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</source>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</source>
Send an online to the already-onlined device to force the zpool to recheck the size; this resizes it without an export/import:
<source lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</source>
Et voila:
<source lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</source>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<source lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</source>
===Optional: Resize the LVM physical volume===
Check the values:
<source lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</source>
OK, we need to resize the physical volume:
<source lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</source>
Check the values:
<source lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</source>
Done.
ebaebe2af2b1a569ff1990258362adf05f16f643
ISCSI Initiator with Linux
0
387
2155
2021-10-18T13:29:38Z
Lollypop
2
Created page with " # cat /etc/netplan/00-installer-config.yaml # This is the network config written by 'subiquity' network: bonds: bond0: addresses: - 10.171.112.135/16..."
wikitext
text/x-wiki
<source lang=bash>
# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  bonds:
    bond0:
      addresses:
      - 10.171.112.135/16
      gateway4: 10.171.101.1
      interfaces:
      - eno1
      - eno2
      nameservers:
        addresses:
        - 10.171.111.11
        - 10.171.111.12
        search:
        - aurdxp.amazonen-werke.com
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
    enp5s0f0: {}
    enp5s0f1: {}
    enp5s0f2: {}
    enp5s0f3: {}
  version: 2
</source>
<source lang=bash>
# cat /etc/netplan/iscsi.yaml
network:
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      addresses:
      - 10.250.171.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:c4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      addresses:
      - 10.251.171.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:c4:cd:18
  version: 2
  renderer: networkd
</source>
<source lang=bash>
# netplan apply
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 32:2d:f2:d0:e2:3f brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 32:2d:f2:d0:e2:3f brd ff:ff:ff:ff:ff:ff
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 24:6e:96:27:68:fa brd ff:ff:ff:ff:ff:ff
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 24:6e:96:27:68:fb brd ff:ff:ff:ff:ff:ff
6: enp5s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether a0:36:9f:b0:6a:f8 brd ff:ff:ff:ff:ff:ff
7: enp5s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether a0:36:9f:b0:6a:f9 brd ff:ff:ff:ff:ff:ff
8: enp5s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether a0:36:9f:b0:6a:fa brd ff:ff:ff:ff:ff:ff
9: enp5s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether a0:36:9f:b0:6a:fb brd ff:ff:ff:ff:ff:ff
10: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:c4:cd:18 brd ff:ff:ff:ff:ff:ff
    inet 10.251.171.32/24 brd 10.251.171.255 scope global iscsi1
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fec4:cd18/64 scope link
       valid_lft forever preferred_lft forever
11: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:c4:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.250.171.32/24 brd 10.250.171.255 scope global iscsi0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fec4:cd1a/64 scope link
       valid_lft forever preferred_lft forever
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:2d:f2:d0:e2:3f brd ff:ff:ff:ff:ff:ff
    inet 10.171.112.135/16 brd 10.171.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
       valid_lft forever preferred_lft forever
# ping 10.250.171.1 -I iscsi0
# ping 10.251.171.1 -I iscsi1
</source>
<source lang=bash>
# cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c143
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi0
# iscsiadm -m iface -I iscsi1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.171.1
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.171.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1
# iscsiadm -m discovery -t st -p 10.251.171.1
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.171.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1, portal: 10.250.171.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1, portal: 10.250.171.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1, portal: 10.251.171.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1, portal: 10.251.171.1,3260] successful.
</source>
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1 (non-flash)
	Current Portal: 10.250.171.1:3260,1
	Persistent Portal: 10.250.171.1:3260,1
		**********
		Interface:
		**********
		Iface Name: iscsi0
		Iface Transport: tcp
		Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c143
		Iface IPaddress: 10.250.171.32
		Iface HWaddress: <empty>
		Iface Netdev: iscsi0
		SID: 1
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1 (non-flash)
	Current Portal: 10.251.171.1:3260,2
	Persistent Portal: 10.251.171.1:3260,2
		**********
		Interface:
		**********
		Iface Name: iscsi1
		Iface Transport: tcp
		Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c143
		Iface IPaddress: 10.251.171.32
		Iface HWaddress: <empty>
		Iface Netdev: iscsi1
		SID: 2
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
</source>
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20000:10.250.171.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028dee5f846b5::20001:10.251.171.1 -o update -n node.session.timeo.replacement_timeout -v 10
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2edf2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07c00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07c00000004
</source>
<source lang=bash>
# cat /etc/multipath.conf
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy multibus
        path_checker tur
        prio const
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 15
    }
}
blacklist {
    # devnode "^sd[a]$"
    # I highly recommend you blacklist by wwid instead of device name
    # blacklist /dev/sda
    wwid 361866da075bdee001f9a2edf2705b9ba
}
multipaths {
    multipath {
        wwid 3628dee5100f846b5243be07c00000004
        # alias here can be anything descriptive for your LUN
        alias veeamrepo
    }
}
</source>
<source lang=bash>
# multipath -r
# multipath -ll
veeamrepo (3628dee5100f846b5243be07c00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 12:0:0:1 sdb 8:16 active ready running
  `- 13:0:0:1 sdc 8:32 active ready running
# ls -al /dev/mapper/veeamrepo
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/veeamrepo -> ../dm-0
</source>
<source lang=bash>
# systemctl cat mnt-veeamrepo.mount
# /etc/systemd/system/mnt-veeamrepo.mount
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-veeamrepo.target

[Mount]
Where=/mnt/veeamrepo
What=/dev/mapper/veeamrepo
Type=xfs
Options=defaults
</source>
Documents:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
e340da75abbede6fe79c58e21ad00a4567b16461
2156
2155
2021-10-19T12:23:45Z
Lollypop
2
wikitext
text/x-wiki
/etc/netplan/bond0.yaml
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
      addresses:
      - 10.71.112.135/16
      gateway4: 10.71.101.1
      nameservers:
        addresses:
        - 10.71.111.11
        - 10.71.111.12
        search:
        - domain.de
</source>
/etc/netplan/iscsi.yaml
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
      - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
      - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
</source>
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.171.112.135/16 brd 10.171.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.171.1
PING 10.250.171.1 (10.250.171.1) from 10.250.171.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.171.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.171.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.171.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.171.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.171.1
PING 10.251.171.1 (10.251.171.1) from 10.251.171.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.171.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.171.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.171.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.171.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
/etc/iscsi/initiatorname.iscsi
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi0
# iscsiadm -m iface -I iscsi1
# iscsiadm -m discovery -t st -p 10.250.171.1
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: cannot make connection to 10.250.171.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
# iscsiadm -m discovery -t st -p 10.251.171.1
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: cannot make connection to 10.251.171.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.171.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.171.1, portal: 10.251.71.1,3260] successful.
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
# cat /etc/multipath.conf
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy multibus
        path_checker tur
        prio const
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 15
    }
}
blacklist {
    # devnode "^sd[a]$"
    # I highly recommend you blacklist by wwid instead of device name
    # blacklist /dev/sda
    wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
    multipath {
        wwid 3628dee5100f846b5243be07d00000004
        # alias here can be anything descriptive for your LUN
        alias veeamrepo
    }
}
# multipath -r
# multipath -ll
veeamrepo (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
# ls -al /dev/mapper/veeamrepo
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/veeamrepo -> ../dm-0
# systemctl cat mnt-veeamrepo.mount
# /etc/systemd/system/mnt-veeamrepo.mount
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-veeamrepo.target
[Mount]
Where=/mnt/veeamrepo
What=/dev/mapper/veeamrepo
Type=xfs
Options=defaults
</source>
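The two HUAWEI paths belong in one multipath map precisely because scsi_id reports the same WWID for both. A sketch that groups devices by WWID from captured output (helper name and the sample layout are my own), which is handy when many LUNs are presented:

```shell
#!/usr/bin/env bash
# Group block devices by WWID: any WWID reported for more than one
# device is a multipath candidate for /etc/multipath.conf.
multipath_candidates() {
    # input lines: "<device> <wwid>"
    awk '{ seen[$2] = seen[$2] ? seen[$2] " " $1 : $1; n[$2]++ }
         END { for (w in n) if (n[w] > 1) print w ": " seen[w] }' <<< "$1"
}

# Captured scsi_id results from above
sample='/dev/sda 361866da075bdee001f9a2ede2705b9ba
/dev/sdb 3628dee5100f846b5243be07d00000004
/dev/sdc 3628dee5100f846b5243be07d00000004'

multipath_candidates "$sample"
# -> 3628dee5100f846b5243be07d00000004: /dev/sdb /dev/sdc
```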
Documents:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
dca7b5cc69440b901cb470834b63d11eeb2eeba3
2157
2156
2021-10-19T12:26:25Z
Lollypop
2
wikitext
text/x-wiki
/etc/netplan/bond0.yaml
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
      addresses:
      - 10.71.112.135/16
      gateway4: 10.71.101.1
      nameservers:
        addresses:
        - 10.71.111.11
        - 10.71.111.12
        search:
        - domain.de
</source>
/etc/netplan/iscsi.yaml
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
      - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
      - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
</source>
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
    inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd18/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
    inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
       valid_lft forever preferred_lft forever
</source>
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
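The -s 8972 in the pings above is not arbitrary: it is the 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP header, so with -M do (don't fragment) a successful reply proves the whole jumbo-frame path works. A sketch of the arithmetic (the helper name is my own):

```shell
#!/usr/bin/env bash
# Largest ICMP payload that fits a given MTU without fragmentation:
# MTU minus IPv4 header (20 bytes) minus ICMP header (8 bytes).
max_icmp_payload() {
    echo $(( $1 - 20 - 8 ))
}

max_icmp_payload 9000   # jumbo frames: 8972, as used above
max_icmp_payload 1500   # standard Ethernet: 1472
```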
/etc/iscsi/initiatorname.iscsi
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi0
# iscsiadm -m iface -I iscsi1
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
	Current Portal: 10.250.71.1:3260,1
	Persistent Portal: 10.250.71.1:3260,1
		**********
		Interface:
		**********
		Iface Name: iscsi0
		Iface Transport: tcp
		Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
		Iface IPaddress: 10.250.71.32
		Iface HWaddress: <empty>
		Iface Netdev: iscsi0
		SID: 1
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
	Current Portal: 10.251.71.1:3260,2
	Persistent Portal: 10.251.71.1:3260,2
		**********
		Interface:
		**********
		Iface Name: iscsi1
		Iface Transport: tcp
		Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
		Iface IPaddress: 10.251.71.32
		Iface HWaddress: <empty>
		Iface Netdev: iscsi1
		SID: 2
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
# cat /etc/multipath.conf
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy multibus
        path_checker tur
        prio const
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 15
    }
}
blacklist {
    # devnode "^sd[a]$"
    # I highly recommend you blacklist by wwid instead of device name
    # blacklist /dev/sda
    wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
    multipath {
        wwid 3628dee5100f846b5243be07d00000004
        # alias here can be anything descriptive for your LUN
        alias veeamrepo
    }
}
# multipath -r
# multipath -ll
veeamrepo (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
# ls -al /dev/mapper/veeamrepo
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/veeamrepo -> ../dm-0
# systemctl cat mnt-veeamrepo.mount
# /etc/systemd/system/mnt-veeamrepo.mount
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-veeamrepo.target
[Mount]
Where=/mnt/veeamrepo
What=/dev/mapper/veeamrepo
Type=xfs
Options=defaults
</source>
Documents:
• https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
• https://www.suse.com/support/kb/doc/?id=000019648
• https://ubuntu.com/server/docs/service-iscsi
37d8ae675477b68a8e1e41b6ba2d4d2e3b097f67
2158
2157
2021-10-19T13:40:53Z
Lollypop
2
wikitext
text/x-wiki
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
      addresses:
        - 10.71.112.135/16
      gateway4: 10.71.101.1
      nameservers:
        addresses:
          - 10.71.111.11
          - 10.71.111.12
        search:
          - domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
    inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd18/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are correctly configured for jumbo-frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If the pings fail, one of the switches along the path or the iSCSI storage is missing its jumbo-frame configuration.
== Configure iSCSI ==
=== Set up the initiator IQN ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
The resulting IQN is stored in /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
=== Set up the iSCSI interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
=== Login to discovered LUNs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
=== Take a look at the running sessions ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the sessions are still OK after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
=== Enable automatic startup of the connections ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get the WWIDs of the devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</source>
=== Start up multipathing ===
From the multipath(8) man page:
<pre>
-r     Force a reload of all existing multipath maps. This command is
       delegated to the multipathd daemon if it's running. In this case,
       other command line switches of the multipath command have no effect.

-ll    Show ("list") the current multipath topology from all available
       information (sysfs, the device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
Documents:
• https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
• https://www.suse.com/support/kb/doc/?id=000019648
• https://ubuntu.com/server/docs/service-iscsi
2abddd2aef462059f1ef70e66d96e5a10fcbb062
2159
2158
2021-10-19T13:41:55Z
Lollypop
2
/* /etc/systemd/system/data.mount */
wikitext
text/x-wiki
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
      addresses:
        - 10.71.112.135/16
      gateway4: 10.71.101.1
      nameservers:
        addresses:
          - 10.71.111.11
          - 10.71.111.12
        search:
          - domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
    inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd18/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are correctly configured for jumbo-frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If the pings fail, one of the switches along the path or the iSCSI storage is missing its jumbo-frame configuration.
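A quick sanity check on the host side: the configured MTU of both backend interfaces must be 9000, otherwise the jumbo-frame ping cannot succeed (a sketch using the interface names configured above):
<source lang=bash>
# ip -o link show iscsi0 | grep -o 'mtu [0-9]*'
mtu 9000
# ip -o link show iscsi1 | grep -o 'mtu [0-9]*'
mtu 9000
</source>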
== Configure iSCSI ==
=== Set up the initiator IQN ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
The resulting IQN is stored in /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
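If the host needs a fresh name (for example after cloning a machine), the generated IQN can be written into the file directly; iscsid only reads it at startup. A minimal sketch, assuming the Debian/Ubuntu file location shown above:
<source lang=bash>
# cp /etc/iscsi/initiatorname.iscsi /etc/iscsi/initiatorname.iscsi.bak
# echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
# systemctl restart iscsid.service
</source>
As the file's own warning says, any access control lists on the storage that reference the old IQN must then be updated.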
=== Set up the iSCSI interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
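The repeated "No route to host" messages are most likely harmless: with several ifaces defined, iscsiadm attempts the discovery through each of them, and each portal is only reachable from one of the two backend networks. Binding the discovery to the matching iface avoids the noise (a sketch using the iface names created above):
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1 -I iscsi0
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
# iscsiadm -m discovery -t st -p 10.251.71.1 -I iscsi1
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>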
=== Login to discovered LUNs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
=== Take a look at the running sessions ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the sessions are still OK after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
=== Enable automatic startup of the connections ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
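Changed node settings only take effect at the next login, so log each session out and back in afterwards (note that this briefly interrupts I/O on the affected path). A sketch; the running value should then appear in the Timeouts section of the verbose session output:
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --logout
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --logout
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
# iscsiadm -m session -P 2 | grep 'Recovery Timeout'
</source>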
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get the WWIDs of the devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy multibus
        path_checker tur
        prio const
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 15
    }
}
blacklist {
    # devnode "^sd[a]$"
    # I highly recommend you blacklist by wwid instead of device name
    # blacklist /dev/sda
    wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
    multipath {
        wwid 3628dee5100f846b5243be07d00000004
        # alias here can be anything descriptive for your LUN
        alias data
    }
}
</source>
=== Start up multipathing ===
From the multipath(8) man page:
<pre>
-r     Force a reload of all existing multipath maps. This command is
       delegated to the multipathd daemon if it's running. In this case,
       other command line switches of the multipath command have no effect.

-ll    Show ("list") the current multipath topology from all available
       information (sysfs, the device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
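What is still missing before the storage is usable is a filesystem on the multipath device and activation of the new mount unit. A minimal sketch, assuming the LUN is new and empty (mkfs.xfs destroys any existing data on the device):
<source lang=bash>
# mkfs.xfs /dev/mapper/data
# mkdir -p /data
# systemctl daemon-reload
# systemctl enable --now data.mount
# df -h /data
</source>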
== Further reading ==
External link collection:
• https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
• https://www.suse.com/support/kb/doc/?id=000019648
• https://ubuntu.com/server/docs/service-iscsi
c740fe90189376429d2777f8298198d19470c650
2160
2159
2021-10-19T13:43:31Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
      addresses:
        - 10.71.112.135/16
      gateway4: 10.71.101.1
      nameservers:
        addresses:
          - 10.71.111.11
          - 10.71.111.12
        search:
          - domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
    inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fed4:cd18/64 scope link
       valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are correctly configured for jumbo-frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If the pings fail, one of the switches along the path or the iSCSI storage is missing its jumbo-frame configuration.
== Configure iSCSI ==
=== Set up the initiator IQN ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
The resulting IQN is stored in /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
=== Set up the iSCSI interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
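The iface setup for iscsi0 and iscsi1 is identical apart from the name, so it can be generated in a loop. The sketch below is a dry run that only prints the iscsiadm commands shown above:

```shell
#!/bin/sh
# Create one iSCSI iface record per backend NIC and bind it to the
# network interface of the same name. Dry run: commands are printed only.
cmds=""
for i in iscsi0 iscsi1; do
    cmds="$cmds
iscsiadm -m iface -I $i -o new
iscsiadm -m iface -I $i --op=update -n iface.net_ifacename -v $i"
done
echo "$cmds"
```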
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
The repeated "No route to host" messages are expected here: discovery is attempted through every configured iface, and the iface bound to the other subnet cannot reach this portal. The attempt through the matching iface succeeds and returns the target record.
=== Log in to the discovered targets ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
=== Take a look at the running sessions ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the session is still OK after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
=== Enable automatic startup of the connections ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get the WWIDs for the devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
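The point of this step: sdb and sdc report the same WWID because they are two paths to the same LUN, while the local RAID has its own. A small sanity check with the values from above:

```shell
#!/bin/sh
# sdb and sdc report the same WWID: they are two paths to one LUN and will
# be grouped into a single multipath map. WWID values are the ones above.
wwid_sda=361866da075bdee001f9a2ede2705b9ba
wwid_sdb=3628dee5100f846b5243be07d00000004
wwid_sdc=3628dee5100f846b5243be07d00000004
if [ "$wwid_sdb" = "$wwid_sdc" ]; then
    echo "sdb and sdc are paths to the same LUN ($wwid_sdb)"
fi
```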
=== Set up the multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</source>
=== Start up multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
23fc40d8f01fce7e05897d2d1a5fc4cc02afc6a9
2161
2160
2021-10-19T13:44:13Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: false
dhcp6: false
optional: true
eno2:
dhcp4: false
dhcp6: false
optional: true
bonds:
bond0:
interfaces:
- eno1
- eno2
parameters:
lacp-rate: slow
mode: 802.3ad
transmit-hash-policy: layer2
addresses:
- 10.71.112.135/16
gateway4: 10.71.101.1
nameservers:
addresses:
- 10.71.111.11
- 10.71.111.12
search:
- domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
enp132s0f0:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.250.71.32/24
set-name: iscsi0
match:
macaddress: a0:36:9f:d4:cd:1a
enp132s0f1:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.251.71.32/24
set-name: iscsi1
match:
macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are configured correctly for jumbo frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If this does not succeed, one of the switches along the path or the iSCSI storage is missing the jumbo-frame settings.
== Configure iSCSI ==
=== Set up the initiator IQN ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
Put the resulting name into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
=== Set up the iSCSI interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
=== Log in to the discovered targets ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
=== Take a look at the running sessions ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the session is still OK after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
=== Enable automatic startup of the connections ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
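With multipathing on top, the default 120 s replacement timeout only delays failover to the surviving path, which is why it is lowered here. The two update commands can be generated in a loop; this sketch is a dry run that only prints them:

```shell
#!/bin/sh
# Lower node.session.timeo.replacement_timeout on every target so that a
# failed path is reported quickly and multipath can fail over.
# Dry run: commands are printed, not executed.
TIMEOUT=10
BASE=iqn.2006-08.com.huawei:oceanstor:210028def5f846b5
for t in "$BASE::20000:10.250.71.1" "$BASE::20001:10.251.71.1"; do
    echo "iscsiadm -m node -T $t -o update -n node.session.timeo.replacement_timeout -v $TIMEOUT"
done
```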
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get the WWIDs for the devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
=== Set up the multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</source>
=== Start up multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
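The new unit is not active yet: systemd has to reload its unit files, and the mount must be started. `systemctl start` works without an [Install] section; to have the mount come up at boot, the unit would additionally need one (for example WantedBy=remote-fs.target) plus `systemctl enable`. A dry-run sketch of the usual sequence, printed instead of executed:

```shell
#!/bin/sh
# Activate the new mount unit. Dry run: commands are printed, not executed.
unit=data.mount
for cmd in "systemctl daemon-reload" "systemctl start $unit" "systemctl status $unit"; do
    echo "$cmd"
done
```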
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
62e91c68835f321cddd124e048815e793ce142eb
2164
2161
2021-10-19T13:46:52Z
Lollypop
2
Lollypop moved page [[Linux iSCSI]] to [[ISCSI Initiator with Linux]]
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: false
dhcp6: false
optional: true
eno2:
dhcp4: false
dhcp6: false
optional: true
bonds:
bond0:
interfaces:
- eno1
- eno2
parameters:
lacp-rate: slow
mode: 802.3ad
transmit-hash-policy: layer2
addresses:
- 10.71.112.135/16
gateway4: 10.71.101.1
nameservers:
addresses:
- 10.71.111.11
- 10.71.111.12
search:
- domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
enp132s0f0:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.250.71.32/24
set-name: iscsi0
match:
macaddress: a0:36:9f:d4:cd:1a
enp132s0f1:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.251.71.32/24
set-name: iscsi1
match:
macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are configured correctly for jumbo frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If this does not succeed, one of the switches along the path or the iSCSI storage is missing the jumbo-frame settings.
== Configure iSCSI ==
=== Set up the initiator IQN ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
Put the resulting name into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
=== Set up the iSCSI interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
=== Log in to the discovered targets ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
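The two logins differ only in the target IQN, so they can be looped over the discovered targets. The sketch below is a dry run that only prints the commands (running `iscsiadm -m node --login` without `-T` should also log in to every node record at once):

```shell
#!/bin/sh
# Log in to both discovered targets. Dry run: commands are printed only.
BASE=iqn.2006-08.com.huawei:oceanstor:210028def5f846b5
for t in "$BASE::20000:10.250.71.1" "$BASE::20001:10.251.71.1"; do
    echo "iscsiadm -m node -T $t --login"
done
```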
=== Take a look at the running sessions ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the session is still OK after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
=== Enable automatic startup of the connections ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get the WWIDs for the devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
=== Set up the multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</source>
=== Start up multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
62e91c68835f321cddd124e048815e793ce142eb
2166
2164
2021-10-19T13:47:57Z
Lollypop
2
/* /etc/multipath.conf */
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: false
dhcp6: false
optional: true
eno2:
dhcp4: false
dhcp6: false
optional: true
bonds:
bond0:
interfaces:
- eno1
- eno2
parameters:
lacp-rate: slow
mode: 802.3ad
transmit-hash-policy: layer2
addresses:
- 10.71.112.135/16
gateway4: 10.71.101.1
nameservers:
addresses:
- 10.71.111.11
- 10.71.111.12
search:
- domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
enp132s0f0:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.250.71.32/24
set-name: iscsi0
match:
macaddress: a0:36:9f:d4:cd:1a
enp132s0f1:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.251.71.32/24
set-name: iscsi1
match:
macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are configured correctly for jumbo-frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If the ping does not succeed, one of the switches along the path or the iSCSI storage is missing jumbo-frame settings.
== Configure iSCSI ==
=== Setup initiator iqn ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
Put the resulting name into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
=== Setup iSCSI-Interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
=== Login to discovered LUNs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
=== Take a look at the running session ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the sessions are still up after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
=== Enable automatic startup of connection ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get wwids for devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda by wwid
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</source>
=== Startup multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
1bd35b7dd4ba43109a878c4cc9b5c4d81a4da00c
2167
2166
2021-10-19T14:13:36Z
Lollypop
2
/* Startup multipathing */
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: false
dhcp6: false
optional: true
eno2:
dhcp4: false
dhcp6: false
optional: true
bonds:
bond0:
interfaces:
- eno1
- eno2
parameters:
lacp-rate: slow
mode: 802.3ad
transmit-hash-policy: layer2
addresses:
- 10.71.112.135/16
gateway4: 10.71.101.1
nameservers:
addresses:
- 10.71.111.11
- 10.71.111.12
search:
- domain.de
</source>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<source lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
enp132s0f0:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.250.71.32/24
set-name: iscsi0
match:
macaddress: a0:36:9f:d4:cd:1a
enp132s0f1:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.251.71.32/24
set-name: iscsi1
match:
macaddress: a0:36:9f:d4:cd:18
</source>
=== Apply the parameters and check settings ===
<source lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</source>
=== Check that all components are configured correctly for jumbo-frames ===
<source lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</source>
If the ping does not succeed, one of the switches along the path or the iSCSI storage is missing jumbo-frame settings.
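The payload size of 8972 used above comes from the 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP header; a quick shell sketch of that arithmetic:
<source lang=bash>
# Largest ICMP payload that exactly fills a given MTU:
# MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"   # 8972
</source>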
== Configure iSCSI ==
=== Setup initiator iqn ===
Generate a new IQN:
<source>
# /sbin/iscsi-iname
</source>
Put the resulting name into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<source>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</source>
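As a rough sanity check, the name should follow the usual <code>iqn.YYYY-MM.reverse-domain:identifier</code> pattern; a minimal sketch (the regex is an approximation, not the full RFC 3720 grammar):
<source lang=bash>
# the example name from above; the grep pattern is a rough plausibility check only
iqn='iqn.1993-08.org.debian:01:4efdaa48c123'
echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:.+$' && echo "looks like an IQN"
</source>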
=== Setup iSCSI-Interfaces ===
<source lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
<source lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</source>
=== Discover LUNs that are offered by the storage ===
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</source>
<source lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</source>
=== Login to discovered LUNs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</source>
=== Take a look at the running session ===
<source lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</source>
=== Check that the sessions are still up after a restart of iscsid.service ===
<source lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</source>
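After the restart, both paths should still be logged in; a small sketch that counts the <code>tcp:</code> session lines, shown here against a hard-coded sample of the output above (in practice, pipe <code>iscsiadm -m session -o show</code> directly):
<source lang=bash>
# sample session listing; each path contributes one line starting with "tcp:"
sessions='tcp: [1] 10.251.71.1:3260,2 target-a (non-flash)
tcp: [2] 10.250.71.1:3260,1 target-b (non-flash)'
count=$(printf '%s\n' "$sessions" | grep -c '^tcp:')
echo "$count"   # 2 - one session per backend interface expected
</source>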
=== Enable automatic startup of connection ===
<source lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</source>
=== Check timeout parameter ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</source>
=== Adjust timeout values to your needs ===
<source lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</source>
== Configure multipathing ==
=== List SCSI devices ===
<source lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</source>
=== Get wwids for devices ===
<source lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</source>
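Paths that belong to the same LUN report the same wwid, so <code>sort | uniq -d</code> over a list of wwids picks out the multipath candidates. A sketch with the wwids from above hard-coded (in practice, feed it the scsi_id output for each device):
<source lang=bash>
# sdb and sdc share a wwid, so only that wwid is printed
printf '%s\n' \
    361866da075bdee001f9a2ede2705b9ba \
    3628dee5100f846b5243be07d00000004 \
    3628dee5100f846b5243be07d00000004 \
  | sort | uniq -d   # prints 3628dee5100f846b5243be07d00000004
</source>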
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<source>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda by wwid
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</source>
=== Startup multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<source lang=bash>
# multipath -r
</source>
<source lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</source>
<source lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</source>
=== Create a systemd unit to mount it at the right time during boot ===
<source lang=bash>
# systemctl edit --force --full data.mount
</source>
==== /etc/systemd/system/data.mount ====
<source lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</source>
=== Enable your unit on next reboot and start it for now ===
<source lang=bash>
# systemctl enable data.mount
# systemctl start data.mount
</source>
=== Check for success ===
<source lang=bash>
# df -h /dev/mapper/data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/data 10T 72G 10T 1% /data
</source>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
50abc493f7b51b408b92de0de08abac60c3c2dbc
Category:FC
14
105
2162
287
2021-10-19T13:44:49Z
Lollypop
2
wikitext
text/x-wiki
[[Category: KnowHow]]
339857b82ab523411f82e25b222bb7f5bb88c2cb
Category:ISCSI
14
388
2163
2021-10-19T13:45:26Z
Lollypop
2
Created page with "[[Category: KnowHow]]"
wikitext
text/x-wiki
[[Category: KnowHow]]
339857b82ab523411f82e25b222bb7f5bb88c2cb
Linux iSCSI
0
389
2165
2021-10-19T13:46:52Z
Lollypop
2
Lollypop moved page [[Linux iSCSI]] to [[ISCSI Initiator with Linux]]
wikitext
text/x-wiki
#REDIRECT [[ISCSI Initiator with Linux]]
188d687fa46e0dd5a4bd5e645c38b9ec0e8c113c
OpenSSL
0
347
2168
2119
2021-10-26T10:09:09Z
Lollypop
2
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<source lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</source>
<source lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout -print_certs
</source>
=CSR=
== Create key and CSR ==
<source lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</source>
== Verify your CSR==
<source lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</source>
6f2f142a5940ae98368fff0baa908366cedd7f54
2169
2168
2021-10-26T10:44:19Z
Lollypop
2
/* Create key and CSR */
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<source lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</source>
<source lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout -print_certs
</source>
=CSR=
== Create key and CSR ==
<source lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</source>
== Verify your CSR==
<source lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</source>
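The chain of <code>${hosts[n]:+,DNS:...}</code> expansions in the request above caps out at five names; an equivalent loop that builds the same subjectAltName string for any number of hosts (same variable names as above):
<source lang=bash>
hosts=( "name1.server.de" "name2.server.de" )
san="DNS:${hosts[0]}"
# append every further host as an additional DNS entry
for h in "${hosts[@]:1}"; do
    san+=",DNS:${h}"
done
echo "$san"   # DNS:name1.server.de,DNS:name2.server.de
</source>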
c6dc72f000bf0f64de0b0e095bb1e5a9b6dd5fe1
Galera Cluster
0
383
2170
2118
2021-11-12T10:57:58Z
Lollypop
2
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Setup certificates for the cluster communication==
Make a CA certificate with a very long lifetime, as you don't want to perform routine certificate renewals at this point.
<source lang=bash>
# openssl genrsa -out ca-key.pem 2048
# openssl req -new -x509 -nodes -days 365000 -key ca-key.pem -out ca-cert.pem
</source>
Create a certificate for each server:
<source lang=bash>
# openssl req -newkey rsa:2048 -nodes -days 365000 -keyout maria-1-key.pem -out maria-1-req.pem
# openssl x509 -req -days 365000 -set_serial 01 -in maria-1-req.pem -out maria-1-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
# openssl req -newkey rsa:2048 -nodes -days 365000 -keyout maria-2-key.pem -out maria-2-req.pem
# openssl x509 -req -days 365000 -set_serial 02 -in maria-2-req.pem -out maria-2-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
# openssl req -newkey rsa:2048 -nodes -days 365000 -keyout maria-3-key.pem -out maria-3-req.pem
# openssl x509 -req -days 365000 -set_serial 03 -in maria-3-req.pem -out maria-3-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
# openssl req -newkey rsa:2048 -nodes -days 365000 -keyout maria-4-key.pem -out maria-4-req.pem
# openssl x509 -req -days 365000 -set_serial 04 -in maria-4-req.pem -out maria-4-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
</source>
=== Show wsrep_provider_options ===
<source lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</source>
f016f59f6e6bfe3cc81caf5e98af976831011c5f
2171
2170
2021-11-12T12:44:02Z
Lollypop
2
/* Setup certificates for the cluster comunication */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Setup certificates for the cluster communication==
Make a CA certificate with a very long lifetime, as you don't want to perform routine certificate renewals at this point.
<source lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</source>
Create a certificate for each server:
<source lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</source>
=== Show wsrep_provider_options ===
<source lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</source>
cab35c30cba200348dd0119021773fdac6a26e8a
2172
2171
2021-11-12T12:47:01Z
Lollypop
2
/* Setup the Cluster */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Setup certificates for the cluster communication==
===Make a CA certificate===
Make a CA certificate with a very long lifetime, as you don't want to perform routine certificate renewals at this point.
<source lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</source>
===Create a certificate for each cluster node===
<source lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</source>
=== Show wsrep_provider_options ===
<source lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</source>
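The awk pipeline above reflows the long semicolon-separated option string onto one line per option; the core idea can be sketched with <code>tr</code> on a made-up option fragment (the option names here are only examples):
<source lang=bash>
# hypothetical wsrep_provider_options fragment; split on ";" and keep only
# lines that still contain a key=value pair
opts='gcache.size=128M;gcs.fc_limit=16;'
echo "$opts" | tr ';' '\n' | grep '='
</source>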
06bdd564a610316e6a559c91fed98fc52c80534d
2173
2172
2021-11-12T14:51:40Z
Lollypop
2
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node, do the following as root:
* Add sources
<source lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</source>
* Install the packages
<source lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</source>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Make a CA certificate with a very long lifetime, as you don't want to perform routine certificate renewals at this level.
<source lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</source>
===Create a certificate for each cluster node===
<source lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</source>
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<source lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</source>
==Configure the MariaDB Galera Cluster==
=== Show wsrep_provider_options ===
<source lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</source>
88fe202093ac4f3427ecacb2c05c50028d69305a
2174
2173
2021-11-12T15:10:07Z
Lollypop
2
/* Configure the MariaDB Galera Cluster */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node, do the following as root:
* Add sources
<source lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</source>
* Install the packages
<source lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</source>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Make a CA certificate with a very long lifetime, as you don't want to perform routine certificate renewals at this level.
<source lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</source>
===Create a certificate for each cluster node===
<source lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</source>
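To sanity-check the generated node certificates against the CA before distributing them, a quick verification step (using the file names from the loop above) might look like this:
<source lang=bash>
$ # verify that each node certificate chains back to the CA
$ for node in {1..4}
do
  servername="maria-${node}.server.de"
  openssl verify -CAfile ca-cert.pem "${servername}-cert.pem"
done
</source>
Each certificate should be reported as "OK".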
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<source lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</source>
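One way to distribute the files is a sketch like the following, assuming root SSH access to the nodes (host names and target directories as above):
<source lang=bash>
$ for node in {1..4}
do
  servername="maria-${node}.server.de"
  scp "${servername}-key.pem" root@"${servername}":/etc/mysql/priv/
  scp "${servername}-cert.pem" ca-cert.pem root@"${servername}":/etc/mysql/cert/
  # the mysql user must be able to read its private key
  ssh root@"${servername}" 'chown -R mysql:mysql /etc/mysql/priv /etc/mysql/cert'
done
</source>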
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<source lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</source>
=== Galera settings ===
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<source lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# Snapshot state transfer (SST): copy entire database, when new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</source>
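With this configuration in place on all nodes, the first node has to bootstrap the cluster; the remaining nodes then join via a normal service start. A sketch (galera_new_cluster ships with MariaDB's systemd integration):
<source lang=bash>
# # on the first node only: bootstrap a new cluster
# galera_new_cluster
# # on every other node: join the running cluster
# systemctl start mariadb
</source>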
== Get knowledge about your Cluster ==
=== Show wsrep_provider_options ===
<source lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</source>
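To check that all nodes have actually joined, query the wsrep status variables, e.g.:
<source lang=bash>
$ # number of nodes currently in the cluster (4, if all joined)
$ mariadb -NBe "show status like 'wsrep_cluster_size'"
$ # 'Primary' if this node is part of the primary component
$ mariadb -NBe "show status like 'wsrep_cluster_status'"
</source>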
1e5429a13405b7421b708efb901d551bfac8bf7a
SuSE Manager
0
348
2175
2108
2021-11-17T10:27:29Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]:
<source lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer versions of zypper call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository bound to your SuSE system, such as the ISO this version was installed from:
<source lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
or enable it:
<source lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</source>
Reinstall the old version of zypper that does not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</source>
And disable the ISO repository:
<source lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</source>
Done.
=====Note: After some further debugging we found that the system library path pulls in a wrong OpenSSL library.=====
<source lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</source>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ that brought /usr/lib/nsr/lib64 into the system library path; there, another libssl.so.1.0.0 (version 1.0.2h) was found first. OK. What to do?
<source lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</source>
Check the success:
<source lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</source>
Now you just have to find a way to get your other software running without manipulating the system library path.
One last check for our case: does our NetWorker use its own SSL libraries?
<source lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</source>
Yep. Great!
== Remove spacewalk from client ==
The way to get rid of spacewalk is:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
== Register at SuSE Manager ==
Afterwards, re-register your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
==Update SuSE Manager certificate==
=== Generate CSR ===
<source lang=bash>
# declare -a hosts=( "susemgr.tld.de" "susemgr-web.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</source>
<source lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</source>
<source lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</source>
7f5897da1ce53f30a2c2b5711e05392703996126
2176
2175
2021-11-17T11:17:01Z
Lollypop
2
/* Update SuSE Manager certificate */
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]:
<source lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer versions of zypper call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository bound to your SuSE system, such as the ISO this version was installed from:
<source lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
or enable it:
<source lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</source>
Reinstall the old version of zypper that does not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</source>
And disable the ISO repository:
<source lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</source>
Done.
=====Note: After some further debugging we found that the system library path pulls in a wrong OpenSSL library.=====
<source lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</source>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ that brought /usr/lib/nsr/lib64 into the system library path; there, another libssl.so.1.0.0 (version 1.0.2h) was found first. OK. What to do?
<source lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</source>
Check the success:
<source lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</source>
Now you just have to find a way to get your other software running without manipulating the system library path.
One last check for our case: does our NetWorker use its own SSL libraries?
<source lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</source>
Yep. Great!
== Remove spacewalk from client ==
The way to get rid of spacewalk is:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
== Register at SuSE Manager ==
Afterwards, re-register your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
== Update SuSE Manager certificate ==
=== Create a working directory ===
<source lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</source>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<source lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir=/root/ssl-build --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</source>
=== Generate CSR ===
<source lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</source>
<source lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</source>
<source lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</source>
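The -subject output above does not show the SANs; to confirm that all host names from the hosts array made it into the request, dump the full CSR text:
<source lang=bash>
# openssl req -noout -text -in server.csr | grep -A1 'Subject Alternative Name'
</source>
This should list one DNS: entry per host name.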
=== Put certificate and key in the Apache directories ===
<source lang=bash>
# ls -al /etc/apache2/ssl.key/server.key /etc/apache2/ssl.crt/server.crt
-rw-r--r-- 1 root root 5441 24. Nov 2020 /etc/apache2/ssl.crt/server.crt
-rw------- 1 root root 1704 24. Nov 2020 /etc/apache2/ssl.key/server.key
</source>
=== Generate RPMs from certificate and key ===
<source lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build" --from-server-key=/etc/apache2/ssl.key/server.key --from-server-cert=/etc/apache2/ssl.crt/server.crt
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</source>
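Deploying the generated noarch RPM on the SUSE Manager host itself and restarting the stack afterwards might look like this (a sketch; RPM file name taken from the output above):
<source lang=bash>
# rpm -Uvh /root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
# spacewalk-service restart
</source>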
5f4bd52aa9dd8bd565944c6d4cd059ff600e909b
2177
2176
2021-11-17T11:36:48Z
Lollypop
2
/* Put certificate and key in the apache directories */
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]:
<source lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer versions of zypper call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository bound to your SuSE system, such as the ISO this version was installed from:
<source lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
or enable it:
<source lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</source>
Reinstall the old version of zypper that does not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</source>
And disable the ISO repository:
<source lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</source>
Done.
=====Note: After some further debugging we found that the system library path pulls in a wrong OpenSSL library.=====
<source lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</source>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
Ok... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path; that directory contained another libssl.so.1.0.0, which was version 1.0.2h. OK. What to do?
<source lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</source>
Check the success:
<source lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</source>
Now you just have to find a way to get your other software running without manipulating the system library path.
One last check for our case: does our NetWorker use its own SSL libraries?
<source lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</source>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of Spacewalk is:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
== Update SuSE Manager certificate ==
=== Create a working directory ===
<source lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</source>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<source lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</source>
=== Generate CSR ===
<source lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</source>
<source lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</source>
<source lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</source>
=== Install certificate and key in the apache directories ===
<source lang=bash>
# rpm -i ~/ssl-build/$(hostname --short)/$(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" ~/ssl-build/$(hostname --short)/latest.txt)
</source>
=== Generate RPMs from certificate and key ===
<source lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build" --from-server-key=/etc/apache2/ssl.key/server.key --from-server-cert=/etc/apache2/ssl.crt/server.crt
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</source>
1cb3e8d436979b69362cd37aed953f72b6bb2b44
2178
2177
2021-11-17T12:34:45Z
Lollypop
2
/* Update SuSE Manager certificate */
wikitext
text/x-wiki
[[category:Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<source lang=bash>
# mgr-sync refresh
</source>
===List available channels===
<source lang=bash>
# mgr-sync list channels
</source>
===Add Channel===
<source lang=bash>
# mgr-sync add channel <channel>
</source>
===Delete Channel===
<source lang=bash>
# spacewalk-remove-channel -c <channel>
</source>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
e.g.:
<source lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</source>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
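The sed expression passed to <code>-x</code> simply anchors at the end of each channel label (<code>$</code>) and appends a timestamp. Standalone, the transformation looks like this (channel name taken from the example above):

```shell
#!/usr/bin/env bash
# Reproduce the label transformation done by -x "s/\$/-$(date ...)/":
# anchor at end of line and append a timestamp suffix.
chan="sles12-sp3-pool-x86_64"
stamp="$(date '+%Y-%m-%d_%H:%M:%S')"
cloned="$(printf '%s\n' "$chan" | sed "s/\$/-${stamp}/")"
echo "$cloned"   # e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
```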
===Compose your own channel===
<source lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</source>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<source lang=bash>
# mgr-create-bootstrap-repo
</source>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]
<source lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</source>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<source lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</source>
==spacecmd==
Just some useful spacecmd commands:
<source lang=bash>
# spacecmd system_list
</source>
==rhn-search==
===Cleanup the search index===
<source lang=bash>
# rhn-search cleanindex
</source>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<source lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</source>
The reason is that newer zypper versions call curl with a cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository that still carries the old packages to your SuSE system, for example the ISO this version was installed from:
<source lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</source>
or enable it:
<source lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</source>
Reinstall the old versions, which do not call curl with the cipher list DEFAULT_SUSE:
<source lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all '*curl*' --queryformat '%{NAME} ')
</source>
And disable the ISO repository:
<source lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</source>
Done.
=====Note: Further debugging showed that the system library path pulled in a wrong OpenSSL library.=====
<source lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</source>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
Ok... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path; that directory contained another libssl.so.1.0.0, which was version 1.0.2h. OK. What to do?
<source lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</source>
Check the success:
<source lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</source>
Now you just have to find a way to get your other software running without manipulating the system library path.
One last check for our case: does our NetWorker use its own SSL libraries?
<source lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</source>
Yep. Great!
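One way to keep such software running without the global ld.so.conf.d entry is a per-process wrapper that hands it a private library path. A sketch (the wrapper idea and paths are assumptions, not a tested NetWorker setup):

```shell
#!/usr/bin/env bash
# Prepend a private library directory for one process only, instead of
# registering /usr/lib/nsr/lib64 system-wide via /etc/ld.so.conf.d/.
prepend_libpath() {
  # $1 = directory to prepend, $2 = current LD_LIBRARY_PATH (may be empty)
  printf '%s' "$1${2:+:$2}"
}

LD_LIBRARY_PATH="$(prepend_libpath /usr/lib/nsr/lib64 "${LD_LIBRARY_PATH:-}")"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
# exec /usr/sbin/nsrexecd "$@"   # hypothetical launch with the private path
```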
== Remove spacewalk from client ==
So the way to get rid of Spacewalk is:
<source lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</source>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<source lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</source>
== Update SuSE Manager certificate ==
=== Create a working directory ===
<source lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</source>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<source lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</source>
=== Generate CSR ===
<source lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</source>
<source lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</source>
<source lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</source>
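The inline printf that assembles the subjectAltName in the one-liner above can be factored into a small helper. A sketch (the helper name is made up; hostnames are the placeholders from the example):

```shell
#!/usr/bin/env bash
# Build the subjectAltName value ("DNS:a,DNS:b,...") from a list of hostnames,
# replacing the chain of ${hosts[n]:+,DNS:...} expansions in the one-liner.
build_san() {
  local out="" h
  for h in "$@"; do
    out="${out:+${out},}DNS:${h}"
  done
  printf '%s' "$out"
}

hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
san="$(build_san "${hosts[@]}")"
echo "subjectAltName=${san}"
```

The CSR command could then use <code>-config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=%s" "$san"))</code> and would no longer be limited to five hostnames.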
=== Generate RPMs from certificate and key ===
<source lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</source>
=== Install certificate and key in the apache directories ===
<source lang=bash>
# cd /root/ssl-build/susemgr
# rpm -i $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</source>
7c1b52a70eebdd44fdaf6d8b470fd2aac57f10bf
Nextcloud
0
368
2179
2107
2021-11-24T15:12:01Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
== Memcached ==
=== ip:port ===
=== socket ===
b384e240df3e0287a0f292d8334cbebceac189a2
2180
2179
2021-11-24T15:15:30Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
== Memcached ==
=== ip:port ===
<source lang=php>
'memcache.distributed' => '\\OC\\Memcache\\Memcached',
'memcached_servers' =>
array (
0 =>
array (
0 => 'localhost',
1 => 11211,
),
),
</source>
=== socket ===
<source lang=php>
'memcache.distributed' => '\\OC\\Memcache\\Memcached',
'memcached_servers' =>
array (
0 =>
array (
0 => '/run/memcached/memcached.sock',
1 => 0
),
),
</source>
79ad956ddd29db3bb5858d1c5bc018847dedac6e
2181
2180
2021-11-24T15:16:07Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
= Memcached =
== ip:port ==
<source lang=php>
'memcache.distributed' => '\\OC\\Memcache\\Memcached',
'memcached_servers' =>
array (
0 =>
array (
0 => 'localhost',
1 => 11211,
),
),
</source>
== socket ==
<source lang=php>
'memcache.distributed' => '\\OC\\Memcache\\Memcached',
'memcached_servers' =>
array (
0 =>
array (
0 => '/run/memcached/memcached.sock',
1 => 0
),
),
</source>
6723dcc4434f743356d0f3707df28fec7f2e6285
2182
2181
2021-11-25T13:25:51Z
Lollypop
2
/* socket */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
= Memcached =
== ip:port ==
<source lang=php>
'memcache.distributed' => '\\OC\\Memcache\\Memcached',
'memcached_servers' =>
array (
0 =>
array (
0 => 'localhost',
1 => 11211,
),
),
</source>
== socket ==
<source lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</source>
6b029e3fb67f51988c2a9047f1b6ca1382c4eec9
2183
2182
2021-11-25T13:26:50Z
Lollypop
2
/* ip:port */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you have your own theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
And the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
= Memcached =
== ip:port ==
<source lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
11211
]
]
}
}
</source>
== socket ==
<source lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</source>
a6ed3c30d13c696a4759cdc31c588498f6dbcc44
Nextcloud
0
368
2184
2183
2021-11-25T13:31:05Z
Lollypop
2
/* Memcached */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<source lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</source>
<source lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</source>
==Send calendar events==
Set the EventRemindersMode to occ:
<source lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</source>
and add a cronjob for the user running the webserver:
<source lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</source>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<source lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</source>
you have to put this in your php apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<source lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</source>
and since version 19:
<source lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</source>
Answer the questions...
If you use a custom theme, proceed with these steps:
<source lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</source>
Then update the apps:
<source lang=bash>
# occ app:update --all
</source>
=Some tweaks for the theme to disable several things=
<source lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</source>
= Memcached =
You can import one of the following config file variants with:
<source lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</source>
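occ config:import rejects malformed JSON, and a classic mistake is using single quotes for strings, which is not valid JSON. A quick sanity check before importing is to run the file through a generic JSON validator; a sketch using python3's json.tool with the ip:port variant (11211, memcached's default port, used as an example value):

```shell
# Write the memcached config and validate it before handing it to
# occ config:import. Any JSON validator works; python3 -m json.tool is
# usually available.
cat > /tmp/memcache.json <<'EOF'
{
    "system": {
        "memcache.distributed": "\\OC\\Memcache\\Memcached",
        "memcached_servers": [ [ "127.0.0.1", 11211 ] ]
    }
}
EOF
python3 -m json.tool /tmp/memcache.json > /dev/null && echo "valid JSON"
# → valid JSON
```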
== ip:port ==
<source lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
11211
]
]
}
}
</source>
== socket ==
<source lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</source>
0de6f9e4682b4d382cc0be758081bf21016a2c14
Admin hints
0
360
2185
2042
2021-11-25T13:32:18Z
Lollypop
2
/* Get your IP address */
wikitext
text/x-wiki
[[category:KnowHow]]
==Cheat sheets==
* [https://cheat.sh Curl usable general cheat sheet]
==DNS==
===Get your IP address===
<source lang=shell-session>
$ dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com
</source>
4c5cfbe78335563cdfbf67c1cae93386fa25af77
Apache
0
205
2186
2012
2021-11-25T13:34:10Z
Lollypop
2
/* Simple script */
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
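To confirm the SANs actually ended up in the generated certificate, inspect it afterwards. A self-contained sketch with a throwaway key and cert under /tmp (with the script above you would point -in at ${CRT} instead; -addext and -ext need OpenSSL 1.1.1+):

```shell
# Generate a throwaway self-signed cert with two SANs, then print the
# subjectAltName extension to verify them (OpenSSL 1.1.1+).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/t.key -out /tmp/t.crt \
    -subj '/CN=www.example.org' \
    -addext 'subjectAltName = DNS:www.example.org,DNS:example.org'
openssl x509 -noout -ext subjectAltName -in /tmp/t.crt
```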
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
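The same flow can be exercised non-interactively by passing the passphrase via -passout/-passin (demo only; with real keys this would leak the passphrase into the shell history):

```shell
# Create an encrypted EC key (passphrase "demo" given on the command line,
# for demonstration only), strip the passphrase, and confirm the resulting
# file is no longer encrypted.
openssl ecparam -genkey -name prime256v1 |
    openssl ec -aes256 -passout pass:demo -out /tmp/demo.ec-key
openssl ec -in /tmp/demo.ec-key -passin pass:demo -out /tmp/demo-plain.ec-key
if ! grep -q ENCRYPTED /tmp/demo-plain.ec-key; then echo "passphrase removed"; fi
```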
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files reside on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will not be able to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
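The SSLRequire rules above match on the O and OU fields of the client certificate's subject. A local sketch of the trust chain this relies on, using a demo CA and client certificate under /tmp ("Your Organization"/"AllowedOU1" as in the config; all file names are throwaway examples):

```shell
# Demo CA plus a client certificate signed by it; openssl verify performs
# the same chain check mod_ssl does for SSLVerifyClient require.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/ca.key -out /tmp/ca.crt -subj '/CN=Demo CA'
openssl req -newkey rsa:2048 -nodes \
    -keyout /tmp/client.key -out /tmp/client.csr \
    -subj '/O=Your Organization/OU=AllowedOU1/CN=demo-user'
openssl x509 -req -in /tmp/client.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
    -CAcreateserial -days 1 -out /tmp/client.crt
openssl verify -CAfile /tmp/ca.crt /tmp/client.crt
```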
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
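The xargs pipeline just builds one -f option per logfile. What it expands to can be checked by substituting echo for apachetop (dummy logfiles under /tmp/logdemo):

```shell
# Build the apachetop command line from a set of logfiles, but echo it
# instead of running apachetop, to show what the pipeline produces.
mkdir -p /tmp/logdemo && rm -f /tmp/logdemo/*.log
touch /tmp/logdemo/a.log /tmp/logdemo/b.log
ls /tmp/logdemo/*.log | xargs -n 1 echo -f | xargs echo apachetop
# → apachetop -f /tmp/logdemo/a.log -f /tmp/logdemo/b.log
```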
cdc142900f4c6e2214783ab9ca4f6cc1fc8fb782
2187
2186
2021-11-25T13:35:55Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files reside on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will not be able to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
36101eb4f3da7faac42b967f22d0c4ad2ac2bfd6
2188
2187
2021-11-25T13:37:01Z
Lollypop
2
/* Serving mp4 media files */
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files reside on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will not be able to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
Top of all sites on your host:
<source lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
851fde54a63d922ecd89e63404d7b8aa944b6f75
2189
2188
2021-11-25T13:37:56Z
Lollypop
2
/* SSL configuration */
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang=bash>
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you don't need a password-protected key file, you can remove the encryption like this:
<source lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem like CIFS or NFS you should disable memory mapping (EnableMMAP Off) to avoid corrupted data at the client side and allow seeking inside the video.
<source lang="apache">
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang="apache">
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang="apache">
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header set X-Frame-Options "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
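This vhost only ever sees HTTPS traffic; a plain-HTTP companion still has to hand clients over so the Strict-Transport-Security header can take effect. A minimal sketch (the ServerName is the example host from above, not part of the original config):

<source lang="apache">
<VirtualHost *:80>
    ServerName ssl.server.de
    # Send everything to the TLS vhost; HSTS then keeps clients there.
    Redirect permanent / https://ssl.server.de/
</VirtualHost>
</source>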
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
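The SSLRequire expression only admits certificates whose subject carries the expected O and OU. Issuing such a client certificate can be sketched with a throwaway CA (all names and paths here are illustrative, matching the example values in the config, not a real PKI):

<source lang="bash">
cd /tmp
# Throwaway CA, standing in for the ca.crt behind SSLCACertificateFile.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
    -out demo-ca.crt -days 1 -subj '/CN=Demo CA'
# Client key and CSR with the O/OU that the SSLRequire expression checks.
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
    -subj '/O=Your Organization/OU=AllowedOU1/CN=alice'
# Sign the CSR with the CA.
openssl x509 -req -in client.csr -CA demo-ca.crt -CAkey demo-ca.key \
    -CAcreateserial -out client.crt -days 1
# Verify the chain, as mod_ssl would during the TLS handshake.
openssl verify -CAfile demo-ca.crt client.crt
</source>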
==ApacheTop==
A live, top-style overview of all sites on this host:
<source lang="bash">
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
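The two xargs stages turn the file list into one -f option per log file before a single apachetop invocation. The expansion can be shown with echo and throwaway files (paths are illustrative):

<source lang="bash">
mkdir -p /tmp/demo-logs
touch /tmp/demo-logs/a.log /tmp/demo-logs/b.log
# Same pipeline, but echoing the final command line instead of running apachetop.
ls /tmp/demo-logs/*.log | xargs -n 1 echo -f | xargs echo apachetop
# → apachetop -f /tmp/demo-logs/a.log -f /tmp/demo-logs/b.log
</source>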
e0fb094eab26e0f8dad33a65cfa7dbd9748c837b
2190
2189
2021-11-25T13:38:11Z
Lollypop
2
/* SSLLabs A+ with all 100% */
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang="bash">
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang="bash">
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang="bash">
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you do not need a password-protected key file, you can remove the encryption like this:
<source lang="bash">
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang="bash">
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang="bash">
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are stored on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang="apache">
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang="apache">
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang="apache">
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header set X-Frame-Options "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang="apache">
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A live, top-style overview of all sites on this host:
<source lang="bash">
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
eb76d46a68058bbbbda08bdfd6576b1f4d461ffa
2191
2190
2021-11-25T13:38:25Z
Lollypop
2
/* Client certificates */
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang="bash">
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang="bash">
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang="bash">
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you do not need a password-protected key file, you can remove the encryption like this:
<source lang="bash">
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang="bash">
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang="bash">
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are stored on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang="apache">
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang="apache">
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang="apache">
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header set X-Frame-Options "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang="apache">
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang="apache">
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A live, top-style overview of all sites on this host:
<source lang="bash">
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
29cd738e833eecefd52159eb8be697b5cd264bfd
2192
2191
2021-11-25T13:49:26Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<source lang="bash">
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</source>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<source lang="bash">
# vi /etc/ssl/openssl.cnf
</source>
===Generate key===
<source lang="bash">
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</source>
If you do not need a password-protected key file, you can remove the encryption like this:
<source lang="bash">
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</source>
===Issue certificate===
<source lang="bash">
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</source>
===View certificate===
<source lang="bash">
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</source>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are stored on a network filesystem such as CIFS or NFS, disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<source lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</source>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<source lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</source>
<source lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header set X-Frame-Options "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</source>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<source lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</source>
==Client certificates==
<source lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</source>
==ApacheTop==
A live, top-style overview of all sites on this host:
<source lang="bash">
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</source>
[[Category:Webserver]]
== Create certificate ==
===Simple script===
<syntaxhighlight lang=bash>
#!/bin/bash
BASE_SUBJECT='/C=DE/ST=Hamburg/L=Hamburg/O=MyOrg/OU=IT'
BASEDIR=/etc/apache2
DAYS=$(( 5 * 365 ))
KEY_DIR=${BASEDIR}/ssl.key
CRT_DIR=${BASEDIR}/ssl.crt
if [ $# -eq 0 ]
then
printf "usage: $0 <webserver-name> [<alias1> <alias2>...]\n"
exit 1
fi
CN=$1
declare -a subject_alt_names;
for i in ${*}
do
subject_alt_names=( ${subject_alt_names[*]} "DNS:${i}")
done
echo ${subject_alt_names[*]}
SHORT=${CN%%.*}
KEY=${KEY_DIR}/${SHORT}.key
CRT=${CRT_DIR}/${SHORT}.crt
OLD_IFS=${IFS}
IFS=","
openssl req \
-new \
-days ${DAYS} \
-newkey rsa:4096 \
-sha512 \
-x509 \
-nodes \
-out ${CRT} \
-keyout ${KEY} \
-subj "${BASE_SUBJECT}/CN=${CN}" \
-reqexts SAN \
-extensions SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf "[ext]\nbasicConstraints=CA:FALSE,pathlen:0\n[SAN]\n%s\n" \
"${subject_alt_names:+subjectAltName = ${subject_alt_names[*]}}"
)
IFS=${OLD_IFS}
printf "Put this in your Apache config:\n\n\tSSLCertificateFile %s\n\tSSLCertificateKeyFile %s\n\n" "${CRT}" "${KEY}"
</syntaxhighlight>
===Adjust the OpenSSL default values===
Set the country, etc. to values that match your needs:
<syntaxhighlight lang=bash>
# vi /etc/ssl/openssl.cnf
</syntaxhighlight>
===Generate key===
<syntaxhighlight lang=bash>
# openssl ecparam -genkey -name secp256r1 | openssl ec -aes256 -out server.de.ec-key
read EC key
using curve name prime256v1 instead of secp256r1
writing EC key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
</syntaxhighlight>
If you don't need a password-protected, encrypted key file, you can remove the encryption like this:
<syntaxhighlight lang=bash>
# openssl ec -in server.de.ec-key -out server.de.ec-key
read EC key
Enter PEM pass phrase:
writing EC key
</syntaxhighlight>
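The same two steps can be scripted non-interactively. A throwaway sketch (demo file names; pass:demo is only for illustration, don't put real passphrases on a command line):

```shell
# Generate an encrypted EC key without prompts, then strip the passphrase
openssl ecparam -genkey -name prime256v1 | \
    openssl ec -aes256 -passout pass:demo -out /tmp/demo.ec-key
openssl ec -in /tmp/demo.ec-key -passin pass:demo -out /tmp/demo-plain.ec-key
# The decrypted copy loads without a passphrase
openssl ec -in /tmp/demo-plain.ec-key -noout -check
```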
===Issue certificate===
<syntaxhighlight lang=bash>
# openssl req -new -x509 -sha256 -key server.de.ec-key -out server.de-wildcard.pem -days 1825 -nodes
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Hamburg]:
Locality Name (eg, city) [Hamburg]:
Organization Name (eg, company) [My Site]:
Organizational Unit Name (eg, section) [Sub]:
Common Name (e.g. server FQDN or YOUR name) []:*.server.de
Email Address [ssl@server.de]:
</syntaxhighlight>
===View certificate===
<syntaxhighlight lang=bash>
# openssl x509 -text -noout -in server.de-wildcard.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: ... (0x...)
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Validity
Not Before: Apr 16 09:35:02 2015 GMT
Not After : Apr 14 09:35:02 2020 GMT
Subject: C=DE, ST=Hamburg, L=Hamburg, O=My Site, OU=Sub, CN=*.server.de/emailAddress=ssl@server.de
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
...
ASN1 OID: prime256v1
X509v3 extensions:
X509v3 Subject Key Identifier:
...
X509v3 Authority Key Identifier:
keyid:...
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
...
</syntaxhighlight>
==Configuring Apache==
=== Serving mp4 media files ===
If your media files are on a network filesystem like CIFS or NFS, you should disable memory mapping (EnableMMAP Off) to avoid corrupted data on the client side and to allow seeking inside the video.
<syntaxhighlight lang=apache>
<Directory /var/www/media-files>
Options -Indexes
AllowOverride None
Require all granted
<IfModule mod_mime>
AddType video/mp4 .mp4
</IfModule>
EnableMMAP Off
</Directory>
</syntaxhighlight>
=== SSL configuration ===
/etc/apache2/mods-available/ssl.conf
<syntaxhighlight lang=apache>
<IfModule mod_ssl.c>
...
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/stapling_cache(128000)"
...
</IfModule>
</syntaxhighlight>
<syntaxhighlight lang=apache>
<VirtualHost ssl.server.de:443>
# ...
SSLEngine On
# Do this only if you are sure you have no old clients
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# If you need to support old clients use this instead
# SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLHonorCipherOrder On
# Do this only if you are sure you have no old clients
SSLCipherSuite HIGH:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!AES256+RSA:!AES128:!ADH:!EXP:!SSLv2:!SSLv3:!MEDIUM:!LOW:!NULL:!aNULL
# If you need to support old clients use this instead
# SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4:!SSLv2:!SSLv3
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
# Generate DH parameters with
# # openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam_4096.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
SetEnvIfNoCase Referer ^https://ssl\.server\.de keep_cookies
RequestHeader unset Cookie env=!keep_cookies
<IfModule mod_headers.c>
# https://kb.sucuri.net/warnings/hardening/headers-x-content-type
Header set X-Content-Type-Options nosniff
# https://kb.sucuri.net/warnings/hardening/headers-x-frame-clickjacking
Header append X-FRAME-OPTIONS "SAMEORIGIN"
# https://kb.sucuri.net/warnings/hardening/headers-x-xss-protection
Header set X-XSS-Protection "1; mode=block"
# Strict Transport Security
Header always set Strict-Transport-Security "max-age=31556926;"
# Public Key Pins
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"...\"; pin-sha256=\"...\"; includeSubDomains"
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# https://kb.sucuri.net/warnings/hardening/http-trace HTTP Trace Method
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
</IfModule>
</VirtualHost>
</syntaxhighlight>
===SSLLabs A+ with all 100%===
'''If you consider using this snippet, be warned: old clients will have no chance to reach the server.'''
<syntaxhighlight lang=apache>
<VirtualHost ssl.server.de:443>
...
# SSL parameters
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA
SSLHonorCipherOrder on
SSLUseStapling on
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/letsencrypt/live/ssl.server.de/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ssl.server.de/privkey.pem
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
SSLOpenSSLConfCmd ECDHParameters Automatic
SSLOpenSSLConfCmd Curves secp521r1:secp384r1:prime256v1
<IfModule mod_headers.c>
# Add security and privacy related headers
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains; preload"
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Content-Type-Options nosniff
Header set X-XSS-Protection "1; mode=block"
Header set X-Robots-Tag "none"
SetEnv modHeadersAvailable true
</IfModule>
...
</VirtualHost>
</syntaxhighlight>
==Client certificates==
<syntaxhighlight lang=apache>
#
## <ClientCertificate>
#
SSLVerifyClient none
SSLCACertificateFile "/var/log/apache2/conf/ca.crt"
SSLCARevocationFile "/var/log/apache2/conf/crl.pem"
SSLCARevocationCheck chain
CustomLog "/var/log/apache2/logs/ssl_user.log" \
"%t %h Serial=%{SSL_CLIENT_M_SERIAL}x User=%{SSL_CLIENT_S_DN_CN}x \"%r\" %b"
<Location />
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth
SSLRequireSSL
SSLRequire %{SSL_CLIENT_S_DN_O} eq "Your Organization" \
and %{SSL_CLIENT_S_DN_OU} in {"AllowedOU1","AllowedOU2"}
</Location>
#
## </ClientCertificate>
#
</syntaxhighlight>
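For browser import, the client certificate and key usually need to be bundled as PKCS#12. A sketch with a throwaway self-signed certificate standing in for a CA-issued client certificate (the file names and the pass:changeit passphrase are examples, not part of the config above):

```shell
# Throwaway client cert; in practice this would be issued by the CA behind ca.crt
openssl req -new -x509 -days 1 -newkey rsa:2048 -sha256 -nodes \
    -subj "/O=Your Organization/OU=AllowedOU1/CN=demo-user" \
    -out /tmp/client.crt -keyout /tmp/client.key
# Bundle cert + key into PKCS#12 for browser import
openssl pkcs12 -export -in /tmp/client.crt -inkey /tmp/client.key \
    -out /tmp/client.p12 -passout pass:changeit
# Read the bundle back and print the certificate subject
openssl pkcs12 -in /tmp/client.p12 -passin pass:changeit -nokeys -nodes | \
    openssl x509 -noout -subject
```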
==ApacheTop==
A top-like view of all sites on your host:
<syntaxhighlight lang=bash>
# ls /var/log/apache2/*.log | xargs -n 1 echo -f | xargs apachetop
</syntaxhighlight>
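The xargs pipeline only assembles one -f argument per log file. With echo standing in for apachetop you can see the command line it builds (the demo directory is hypothetical):

```shell
mkdir -p /tmp/apachetop-demo
touch /tmp/apachetop-demo/a.log /tmp/apachetop-demo/b.log
# echo stands in for apachetop to show the generated command line
ls /tmp/apachetop-demo/*.log | xargs -n 1 echo -f | xargs echo apachetop
# prints: apachetop -f /tmp/apachetop-demo/a.log -f /tmp/apachetop-demo/b.log
```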
=Solaris process debugging=
[[Kategorie:Solaris|Debugging]]
==Swap usage per process==
<syntaxhighlight lang=awk>
# pgrep . | xargs -n 1 pmap -S 2>/dev/null | nawk '
function kb2h(value){
unit=1;
while(value>=1024){
unit++;
value/=1024;
};
split("KB,MB,GB,TB,PB", unit_string, /,/);
return sprintf("%7.2f %s",value,unit_string[unit]);
}
/[0-9]+:/ {
pid=$1;
prog=$2;
}
/^total/{
swap_total+=$3;
printf ("%s\t%s\t%s\n",pid,kb2h($3),prog);
}
END{
printf "Total:\t%s\n",kb2h(swap_total);
}'
</syntaxhighlight>
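The kb2h helper can be exercised on its own to check the unit conversion (plain awk here instead of Solaris nawk):

```shell
# 1536 KB and 3145728 KB converted to human-readable units
awk '
function kb2h(value){
    unit=1;
    while(value>=1024){ unit++; value/=1024; }
    split("KB,MB,GB,TB,PB", unit_string, /,/);
    return sprintf("%7.2f %s", value, unit_string[unit]);
}
BEGIN{ print kb2h(1536); print kb2h(3145728); }'
```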
==Set the core file size limit on a process==
For example, for sshd (and all children it spawns from now on):
<syntaxhighlight lang=bash>
ssh-server# prctl -n process.max-core-size -v 2g -t privileged -r -e deny $(pgrep -u root -o sshd)
</syntaxhighlight>
Check:
<syntaxhighlight lang=bash>
ssh-server# prctl -n process.max-core-size $(pgrep -u root -o sshd)
process: 1491: /usr/lib/ssh/sshd
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-core-size
privileged 2.00GB - deny -
system 8.00EB max deny -
</syntaxhighlight>
Now all new processes (for example, newly logged-in users) will have a core file size limit of 2 GB... really? No!
<syntaxhighlight lang=bash>
ssh-client# ssh ssh-server
ssh-server# ulimit -Ha | grep core
core file size (blocks, -c) 2097152
</syntaxhighlight>
Look closely at the unit it reports: blocks!
From the man page: -c Maximum core file size (in 512-byte blocks)
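The arithmetic behind the surprise: bash counts ulimit -c in 1024-byte units, so the displayed 2097152 blocks is exactly the 2 GB set with prctl (classic Bourne-shell man pages document 512-byte blocks instead, which would halve the limit):

```shell
# 2097152 blocks x 1024 bytes/block = 2147483648 bytes = 2 GiB
echo $((2097152 * 1024))
# prints: 2147483648
```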
=Linux grub=
[[Kategorie:Linux|Grub]]
[[Kategorie:Grub|Linux]]
=grub rescue>=
The problem:
<syntaxhighlight lang=bash>
...
Entering rescue mode...
grub rescue>
</syntaxhighlight>
==Get into the normal grub==
Find your devices:
<syntaxhighlight lang=bash>
grub rescue> ls
</syntaxhighlight>
===Find the directory where the normal.mod file resides===
In this example we use LVM; /boot/grub lives in the LV lv-root of the VG vg-root.
<syntaxhighlight lang=bash>
grub rescue> ls (lvm/vg--root-lv--root)/boot/grub/i386-pc
... normal.mod ...
</syntaxhighlight>
===Set the prefix to the right place===
<syntaxhighlight lang=bash>
grub rescue> set prefix=(lvm/vg--root-lv--root)/boot/grub
</syntaxhighlight>
===Now you can load and start the module called "normal"===
<syntaxhighlight lang=bash>
grub rescue> insmod normal
grub rescue> normal
</syntaxhighlight>
If the menu does not appear, you get something like this:
<syntaxhighlight lang=bash>
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</syntaxhighlight>
==Normal grub is booted, now start the kernel==
Example for LVM:
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod lvm
insmod ext2
set root='lvmid/KAlPF4-Qb8I-Sx41-10cC-lACw-Msoh-3qEohv/pmE9Nt-rLG3-FlNM-CwOT-hy42-gSnm-fZSn3l'
linux /boot/vmlinuz-4.4.0-53-generic root=/dev/mapper/vg--root-lv--root ro
initrd /boot/initrd.img-4.4.0-53-generic
</syntaxhighlight>
Example for ZFS-Root:
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod zfs
set root='hd0,msdos4'
linux /ROOT/ubuntu-15.04@/boot/vmlinuz-4.4.0-57-generic root=ZFS=rpool/ROOT/ubuntu-15.04 boot=zfs zfs_force=1 ro quiet splash nomdmonddf nomdmonisw $vt_handoff
initrd /ROOT/ubuntu-15.04@/boot/initrd.img-4.4.0-57-generic
</syntaxhighlight>
=Solaris I/O Analysis=
[[Kategorie:Solaris]]
==Which filesystem is busy?==
For ZFS (-F zfs) you can use this one-liner:
<syntaxhighlight lang=bash>
# fsstat -i $(df -hF zfs | nawk '{print $NF}') 5
</syntaxhighlight>
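The embedded nawk just prints the last field of every df line, i.e. the mount point column. A quick demo with canned df-style output and plain awk:

```shell
# $NF is the last whitespace-separated field of each line
printf 'rpool/ROOT 10G 5G 5G 50%% /\nrpool/export 10G 1G 9G 10%% /export\n' | awk '{print $NF}'
# prints:
# /
# /export
```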
==Links==
* [https://blogs.oracle.com/BestPerf/entry/i_o_analysis_using_dtrace I/O analysis using DTrace]
* [http://www.brendangregg.com/DTrace/dtrace_oneliners.txt Brendan Gregg's DTrace one-liners]
=ZFS sync script=
[[Kategorie:ZFS|Sync]]
Like all of my scripts, this one comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark the datasets that should be copied by the backup host, run this on the source:
<syntaxhighlight lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</syntaxhighlight>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this (Solaris syntax):
<syntaxhighlight lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</syntaxhighlight>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
==zfs_sync.sh==
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
# Some defaults
BACKUP_PROPERTY="de.timmann:auto-backup"
BACKUP_SNAPSHOT_NAME="zfssync"
MBUFFER_PORT=10001
MBUFFER=/opt/mbuffer/bin/mbuffer;
SRC_USER=zfssync
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="yes"
LOCAL_SYNC="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
BACKUP_PROPERTY="de.timmann:auto-backup"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
AWK=/usr/bin/gawk
#AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
MYHOST=$(/usr/bin/hostname)
MYNAME=$(/usr/bin/basename $0)
function usage () {
if [ $# -gt 0 ]
then
if [ "_${1}_" != "_help_" ]
then
echo "Error: ${MYNAME} : $*"
fi
else
echo "Error: ${MYNAME} : Check parameters"
fi
cat <<EOU
Usage: ${MYNAME} <params>
Where params is from this set of parameters:
-s|--src-ip <IP> The host from where we want to sync
-d|--dst-ip <IP> The IP on this host where the remote mbuffer should try to connect to
If omitted the IP to use is guessed via route get.
-u|--user <user> The user on "--src-ip" which has rights to send a zfs.
It must be able to login via ssh with public key.
On Solaris it is the profile "ZFS File System Management"
Try this on the "--src-ip":
# roleadd \
-d /export/home/zfssync \
-c "User for zfs send/recv" \
-s /bin/bash \
-m \
-P "ZFS File System Management" \
zfssync
# rolemod -K type=normal zfssync
# passwd -N zfssync
And then put the ssh-public-key from this host into
/export/home/zfssync/.ssh/authorized_keys
on the "--src-ip".
Remember to set the permissions on .ssh to 700 and .ssh/authorized_keys to 600.
The Homedir of the user must not be world writeable.
-sp|--src-pool <zpool> The zpool we want to sync from "--src-ip".
-dp|--dst-pool <zpool> The zpool on this host where we want to sync to ${MYNAME}.
-mbp|--mbuffer-port <port>
If the default port 10001 is in use use another port.
-mb|--mbuffer-path <path>
Path of mbuffer binary including binary itself.
-mbbw|--mbuffer-bwlimit <rate>
Limit the read bandwidth of mbuffer (mbuffer option -r)
From mbuffer --help: limit read rate to <rate> B/s, where <rate> can be given in b,k,M,G
-bp|--backup-property <property>
This defaults to ${BACKUP_PROPERTY}.
You have to set this property on all ZFS datasets to ${MYHOST}.
# /usr/sbin/zfs set ${BACKUP_PROPERTY}=${MYHOST} <dataset>
This is inherited as usual.
-bsn|--backup-snap-name <snapshotname>
This is the name of the snapshot which we use to sync.
This defaults to ${BACKUP_SNAPSHOT_NAME}.
Never delete this snapshot manually or you will break the sync and restart
from the beginning.
-i|--insecure Not for production environments! No ssh tunneling. No encryption over the net!
EOU
##-l|--local Just do a local zfs send/recv...
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
--help|-h)
usage "help"
;;
-l|--local)
LOCAL_SYNC="yes"
SRC_HOST="localhost"
param="dummy"
shift;
;;
-i|--insecure|--fuck-off-security)
SECURE="no"
param="dummy"
shift;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
param=$1
if [ $# -ge 2 -a "_${2%-*}_" != "__" ]
then
value=$2
shift
fi
shift
;;
esac
case $param in
-s|--src-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_HOST=${value}
;;
-d|--dst-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_HOST=${value};
;;
-u|--user)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_USER=${value}
;;
-sp|--src-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_POOL=${value}
;;
-bsn|--backup-snap-name)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_SNAPSHOT_NAME=${value}
;;
-dp|--dst-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_POOL=${value}
;;
-mbp|--mbuffer-port)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_PORT=${value}
;;
-mb|--mbuffer-path)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER=${value}
;;
-mbbw|--mbuffer-bwlimit)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_OPTS="${MBUFFER_OPTS} -r ${value}"
;;
-bp|--backup-property)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_PROPERTY=${value}
;;
dummy)
;;
*)
usage "Unknown parameter $1"
esac
done
if [ "_${LOCAL_SYNC}_" == "no" ]
then
if [ -z ${SRC_HOST} ]; then usage "-s|--src-ip is missing" ; fi
# Guess the right IP for communication with source host
if [ -z ${DST_HOST} ]; then
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
if [ -z ${DST_HOST} ]; then
usage "-d|--dst-ip is missing"
fi
fi
fi
if [ -z ${SRC_POOL} ]; then usage "-sp|--src-pool is missing" ; fi
if [ -z ${DST_POOL} ]; then usage "-dp|--dst-pool is missing" ; fi
SRC_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}_${DST_POOL/\//_}.lck
TMP_FILE1=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp1
TMP_FILE2=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp2
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'echo "\n--- Got signal: Exiting ...\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
/usr/bin/rm -f ${LOCK_FILE}; \
exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL} > ${SRC_DATASETS} &
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
fi
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYHOST} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and bail out...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="^${1}" -F '[@ \t]' '
$3 == "snapshot" && $1 ~ zfs {
last=$1"@"$2;
}
END{
printf last;
}
' $2
}
function get_incremental_snapshot () {
    if [ $# -lt 6 ] ; then
        echo "Called from line ${BASH_LINENO[0]} with $# arguments"
        end 1
    fi
    src_host=$1
    src_datasets=$2
    first=$3
    last=$4
    dst_pool=$5
    dst_datasets=$6
    src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
    first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
    echo "Getting snapshot ${last}..."
    if [ "_${LOCAL_SYNC}_" == "_yes_" ]
    then
        ${ZFS} send -I ${first_snap} ${last} | ${ZFS} recv -vFd ${dst_pool}
    else
        if [ "_${SECURE}_" == "_yes_" ]
        then
            # set up the receiver
            ${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
                ${ZFS} recv -vFd ${dst_pool} 2>&1 &
            # start the sender
            ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
                -R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
                "${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
        else
            # set up the receiver
            ${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
                ${ZFS} recv -vFd ${dst_pool} 2>&1 &
            # start the sender
            ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
                "${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
        fi
        wait
        local_md5=$(grep md5 ${TMP_FILE1})
        remote_md5=$(grep md5 ${TMP_FILE2})
        local_summary=$(grep summary ${TMP_FILE1})
        remote_summary=$(grep summary ${TMP_FILE2})
        printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
        printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
        rm -f ${TMP_FILE1} ${TMP_FILE2}
    fi
}
function get_initial_snapshot () {
    src_host=$1
    src_datasets=$2
    zfs=$3
    dst_pool=$4
    dst_datasets=$5
    if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
        echo "Getting snapshot ${zfs}..."
        if [ "_${LOCAL_SYNC}_" == "_yes_" ]
        then
            ${ZFS} send -R ${zfs} | ${ZFS} recv -vFd ${dst_pool}
        else
            if [ "_${SECURE}_" == "_yes_" ]
            then
                # set up the receiver
                ${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
                    ${ZFS} recv -vFd ${dst_pool} 2>&1 &
                # start the sender
                ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
                    -R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
                    "${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
            else
                # set up the receiver
                ${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
                    ${ZFS} recv -vFd ${dst_pool} 2>&1 &
                # start the sender
                ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
                    "${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
            fi
            wait
            local_md5=$(grep md5 ${TMP_FILE1})
            remote_md5=$(grep md5 ${TMP_FILE2})
            local_summary=$(grep summary ${TMP_FILE1})
            remote_summary=$(grep summary ${TMP_FILE2})
            printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
            printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
            rm -f ${TMP_FILE1} ${TMP_FILE2}
        fi
    fi
}
function timestamp () {
    ${DATE} '+%Y%m%d-%H:%M:%S'
}
function expire_backup_snapshots () {
    src_host=$1
    src_datasets=$2
    dst_datasets=$3
    src_last_to_keep=$4
    dst_pool=$5
    src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
    dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
    dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
    echo "Deleting old backup snapshots before ${dst_last_to_keep}"
    if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
        for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" -v src_last_to_keep="${src_last_to_keep}" '
            $1 == src_last_to_keep {
                exit 0;
            }
            $1 ~ src_backup {
                print $1;
            }
        ' ${src_datasets})
        do
            printf "\tDeleting on src ${src_backup_snapshot} ..."
            if [ "_${LOCAL_SYNC}_" == "_yes_" ]
            then
                ${ZFS} destroy ${src_backup_snapshot}
                status=$?
            else
                ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}"
                status=$?
            fi
            if [ ${status} -eq 0 ] ; then
                echo "done"
            else
                echo "failed"
            fi
        done
        for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" -v dst_last_to_keep=${dst_last_to_keep} '
            $1 == dst_last_to_keep {
                exit 0;
            }
            $1 ~ dst_backup {
                print $1;
            }
        ' ${dst_datasets})
        do
            printf "\tDeleting on destination ${dst_backup_snapshot} ..."
            if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
                echo "done"
            else
                echo "failed"
            fi
        done
    else
        echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
    fi
}
function end () {
    /usr/bin/rm -f ${LOCK_FILE}
    exit $1
}
for src_zfs in $(get_src_list) ; do
    echo "Evaluating ${src_zfs}"
    dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
    last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
    last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
    last_backup_src=$(${AWK} -v zfs="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${SRC_DATASETS})
    last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${DST_DATASETS})
    last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
    this_backup_src=${src_zfs}@${BACKUP_SNAPSHOT_NAME}_$(timestamp)
    # Create snapshot for incremental backups
    if [ "_${LOCAL_SYNC}_" == "_yes_" ]
    then
        ${ZFS} snapshot ${this_backup_src}
    else
        ${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
    fi
    if [ -z "${last_src}" ] ; then
        last_src=${this_backup_src}
    fi
    if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
        echo "zfs is on dst, but has no snapshots. Getting ${last_src}..."
        get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
    # Look for the last backup snapshot on the destination
    elif [ -n "${last_backup_dst}" ] ; then
        # Name of the last backup snapshot on src
        last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
        # If the converted name is not empty and the snapshot is in the list of src snapshots,
        # then get all snapshots from the last backup until now
        if [ -n "${last_dst_backup_on_src}" ] ; then
            if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
                # Get the snapshot of this backup
                printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
                get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
                    expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
            fi
        fi
    elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
        # No last backup snapshot on dst, but we have snapshots
        if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
            echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
            first=${last_dst_on_src}
            last=${last_src}
            get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
                expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
            # Get the snapshot of this backup
            printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
            get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
                expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
        else
            echo "OK, I tried hard... now it is your job..."
        fi
    else
        # No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
        first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
            $1 ~ zfs && $2=="snapshot" {
                last[++count]=$1;
            }
            END {
                if(count>initial_copies){
                    print last[count-initial_copies]
                }else{
                    print last[1]
                }
            }' ${SRC_DATASETS})
        last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{printf last}' ${SRC_DATASETS} )
        get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
        get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
            expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
        # Get the snapshot of this backup
        printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
        get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
            expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
    fi
    echo
    echo --------------------------------------------------------------------------------
    date
    echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</syntaxhighlight>
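The awk filter at the top of get_src_list keeps only dataset names that are not a prefix of another listed dataset. A standalone demonstration of that filter (the dataset names and the host "myhost" are made up for illustration):

```shell
# Standalone demo of the get_src_list prefix filter: the ancestors
# tank and tank/home are dropped, only the deepest entries survive.
printf '%s\n' \
    'tank filesystem myhost' \
    'tank/home filesystem myhost' \
    'tank/home/lars filesystem myhost' \
    'dozer volume myhost' |
awk -v backup_server=myhost '
    ( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
        path[$1]=1;
        for(name in path){
            if( index($1,name)==1 && name != $1 && path[name]!=0 ){
                path[name]=0;
            }
        }
    }
    END{
        for(name in path){
            if(path[name]==1) print name
        }
    }'
# prints (in unspecified order): dozer, tank/home/lars
```

Note that index() performs a plain prefix test on the string, so a dataset named tank/homer would also shadow tank/home; the real dataset hierarchy separator '/' is not checked.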
7b5fe566578ad808e2d4d8b3536ddcab74b8d40e
Solaris cluster clone
0
185
2198
657
2021-11-25T14:21:17Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|Cluster Clone]][[Kategorie:SunCluster|Clone]]
If you need to recreate a cluster node from a surviving node, perform the following steps:
==Clone system disk==
For example via metattach to the metaroot.
==Edit the normal Solaris parameters==
/etc/nodename
/etc/hostname.*
Check: /etc/inet/hosts
If mirrored by SVM, do:
# Edit /etc/vfstab of the clone to normal Devices
# Edit /etc/system:
<syntaxhighlight lang=bash>
* Begin MDD root info (do not edit)
** rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
</syntaxhighlight>
Unmount the cloned disk
fsck the cloned disk's root slice
==Edit the cluster parameters==
Get the right id from:
<syntaxhighlight lang=bash>
# nawk '/cluster\.nodes\.[^.]*\.name/{split($1,field,"."); print field[3],$NF}' /etc/cluster/ccr/global/infrastructure
1 node-a
2 node-b
</syntaxhighlight>
Then set the node id of the clone:
echo <nodeid> > /etc/cluster/nodeid
For example, for node-b:
echo 2 > /etc/cluster/nodeid
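The lookup and the write can be combined; a sketch, assuming awk behaves like Solaris nawk for this pattern (lookup_nodeid is a hypothetical helper, not part of the cluster tools):

```shell
# Hypothetical helper: print the node id that belongs to a node name,
# using the same field splitting as the nawk one-liner above.
lookup_nodeid() {
    awk -v node="$2" '/cluster\.nodes\.[^.]*\.name/{
        split($1,field,".");
        if($NF==node) print field[3];
    }' "$1"
}
# On the clone, e.g.:
#   lookup_nodeid /etc/cluster/ccr/global/infrastructure node-b > /etc/cluster/nodeid
```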
1c5c07643325a5247a95a00ecad9a70fdce7ec81
Linux Tipps und Tricks
0
273
2199
2154
2021-11-25T14:21:25Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
The first line enables sysrq; the second sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
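The size check above is plain sector arithmetic: the kernel reports the disk size in 512-byte sectors, and dividing by 1024³ yields whole GiB. As a standalone sketch (the sector count 41943040 is just an example value for a 20 GiB disk):

```shell
# Convert a sector count (512-byte sectors, as reported by
# /sys/block/<dev>/size) to whole GiB; 1073741824 is 1024^3.
sectors_to_gib() {
    echo $(( 512 * $1 / 1073741824 ))
}
sectors_to_gib 41943040   # 20
```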
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As in this example, the lowest SCSI ID does not always correspond to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
# mount
# pvs
# zpool status
# etc.
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the address reported by lsscsi above.
Et voila:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now tell the ZPool to grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Send an online to the already-online device to force the ZPool to recheck the size, resizing it without an export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voila:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand to off if you want to prevent automatic expansion when the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
Done.
7fada3e995603e90f994edaab9eafa5bae71e3ca
SunServer
0
210
2200
1917
2021-11-25T14:21:37Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Hardware]]
=X86 Systems=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systems=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</syntaxhighlight>
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<syntaxhighlight lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</syntaxhighlight>
* Delete default gateway:
<syntaxhighlight lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</syntaxhighlight>
* Set:
<syntaxhighlight lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</syntaxhighlight>
===Enable sending of break signal===
Example on an M10 (partition 0)... break_signal=off means turning suppression of break signals off ;-) :
<syntaxhighlight lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</syntaxhighlight>
89f4c13c7716f01d232f05f5de1423fe46e53319
Solaris 11 bootadm
0
207
2201
2083
2021-11-25T14:21:54Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|bootadm]]
==Booting via the SP console at 115200 baud==
Add a new ttydef with 115200:
<syntaxhighlight lang=bash>
# echo "console115200:115200 hupcl opost onclr:115200::console" >> /etc/ttydefs
</syntaxhighlight>
Set the new console for system/console-login:default
<syntaxhighlight lang=bash>
# svccfg -s svc:/system/console-login:default setprop ttymon/label=console115200
# svcadm refresh svc:/system/console-login:default
# svcadm restart svc:/system/console-login:default
</syntaxhighlight>
Setup your boot menu:
<syntaxhighlight lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya -xs"
# bootadm add-entry -i 3 "Solaris (kernel debugger)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya -k"
# bootadm add-entry -i 4 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 4 kargs="-B \$zfs_bootfs,console=ttya -x -m milestone=none"
</syntaxhighlight>
97b5cb5bfe0b3e0fa3317970829bffe351e4cd9f
ZFS Recovery
0
30
2202
840
2021-11-25T14:22:21Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:ZFS|Recovery]]
[[Kategorie:Solaris]]
==Panic at boot time==
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<syntaxhighlight lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</syntaxhighlight>
Under /etc/zfs:
<syntaxhighlight lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</syntaxhighlight>
For a ZPool under Solaris Cluster:
<syntaxhighlight lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</syntaxhighlight>
So, for example, to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853:
<syntaxhighlight lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</syntaxhighlight>
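Choosing the rollback txg from zdb output like the above can also be scripted; a minimal sketch (the cutoff timestamp 1352184900 and the sample lines are made up):

```shell
# From "txg <n> timestamp = <seconds> ..." lines, print the newest txg
# whose timestamp is at or before the cutoff.
printf '%s\n' \
    'txg 40353851 timestamp = 1352184849 UTC' \
    'txg 40353853 timestamp = 1352184849 UTC' \
    'txg 40353870 timestamp = 1352185334 UTC' |
awk -v cutoff=1352184900 '$1=="txg" && $5+0 <= cutoff+0 { last=$2 } END { print last }'
# prints: 40353853
```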
==PANIC, NOTICE: spa_import_rootpool: error 19==
The solution is to specify the pool and the device explicitly. So if at boot time you see:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
then booting into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
576607c313a11147f973b845737b56ca3e80b4e2
MySQL Tipps und Tricks
0
197
2203
2060
2021-11-25T14:22:26Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==One-liners==
===Show MySQL-traffic fired from a client===
<syntaxhighlight lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</syntaxhighlight>
===MySQL processes every second===
<syntaxhighlight lang=bash>
# mysqladmin -i 1 --verbose processlist
</syntaxhighlight>
===All grants===
<syntaxhighlight lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</syntaxhighlight>
Or a little nicer:
<syntaxhighlight lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$(( param + after - before ))
count=$(( count + after - before ))
done
# Get user for database in grant_db array
for db in "${grant_db[@]}"
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in "${grant_user[@]}"
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</syntaxhighlight>
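Two bash idioms in the script above are easy to misread: the `user@host` split via parameter expansion, and the in-place deletion of positional parameters with `set --`. A minimal standalone sketch of both (the grant pattern is a made-up example):

<syntaxhighlight lang=bash>
#!/bin/bash
# Split a user@host pattern the way the script does.
grant='lolly@192.168.%'
user="${grant%@*}"   # remove the @* suffix        -> lolly
host="${grant/*@}"   # remove everything up to '@' -> 192.168.%
echo "user=${user} host=${host}"

# Delete the n-th positional parameter (1-based) in place,
# the same idiom the option parser uses for --all.
set -- --gu 'root@%' --all
param=2
set -- "${@:1:param-1}" "${@:param+1}"   # drop parameter 2
remaining="$*"
echo "remaining: ${remaining}"
</syntaxhighlight>

The `"${@:1:param-2}" "${@:param+1}"` variant in the script deletes two parameters at once (the option plus its value).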
===Last update time===
* Per table
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</syntaxhighlight>
* Per database
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</syntaxhighlight>
==InnoDB space==
===Per database===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</syntaxhighlight>
===Per table===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</syntaxhighlight>
==Logging==
Settings changed with SET GLOBAL only last until the next server restart.
'''Don't forget to add it in your my.cnf to make it permanent!'''
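A sketch of the corresponding my.cnf fragment (the file paths and the threshold are placeholders; option names as in MySQL 5.x):

<syntaxhighlight lang=ini>
[mysqld]
# destination FILE, TABLE, or a combination
log_output          = FILE
general_log         = 1
general_log_file    = /var/lib/mysql/general.log
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time     = 2
</syntaxhighlight>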
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the tables mysql.slow_log and mysql.general_log
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</syntaxhighlight>
* Log to the files defined by general_log_file and slow_query_log_file
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</syntaxhighlight>
* Both: tables and files
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</syntaxhighlight>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</syntaxhighlight>
is equivalent to
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</syntaxhighlight>
===Enable/disable general logging===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
===Enable/disable logging of slow queries===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
if you get
ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
To find which binlog file to investigate on the master, run this on your slave:
<syntaxhighlight lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</syntaxhighlight>
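The awk filter simply prints the lines whose first field is Master_Log_File:. A standalone illustration with canned slave-status output (the host and binlog name are made up):

<syntaxhighlight lang=bash>
#!/bin/bash
# Simulated excerpt of 'show slave status\G' output.
status='Slave_IO_State: Waiting for master to send event
Master_Host: db-master
Master_Log_File: mysql-bin.000042
Read_Master_Log_Pos: 107'
binlog=$(printf '%s\n' "$status" | awk '$1=="Master_Log_File:"{print $2}')
echo "$binlog"
</syntaxhighlight>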
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<syntaxhighlight lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</syntaxhighlight>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; the default mode: metadata is journaled, and data blocks are flushed before the metadata that references them)
* data=journal (worst performance but best data protection; metadata and all data are journaled)
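In /etc/fstab such a mount could look like this (device, mount point, and the choice of data=writeback are assumptions for illustration):

<syntaxhighlight lang=text>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</syntaxhighlight>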
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<syntaxhighlight lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
Determine the size:
<syntaxhighlight lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</syntaxhighlight>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<syntaxhighlight lang=ini>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</syntaxhighlight>
Start mysql:
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<syntaxhighlight lang=text>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</syntaxhighlight>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<syntaxhighlight lang=text>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</syntaxhighlight>
Reload apparmor:
<syntaxhighlight lang=bash>
# service apparmor reload
</syntaxhighlight>
Another try!
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
<syntaxhighlight lang=text>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</syntaxhighlight>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<syntaxhighlight lang=ini>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</syntaxhighlight>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<syntaxhighlight lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</syntaxhighlight>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
The rpcsec_gss_krb5 kernel module can cause performance problems; to prevent it from loading:
<syntaxhighlight lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</syntaxhighlight>
====== /etc/sysctl.d/99-mysql.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</syntaxhighlight>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</syntaxhighlight>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<syntaxhighlight lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</syntaxhighlight>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service, you have to tell systemd the new limit.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the change and check the limit:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</syntaxhighlight>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mount is ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the changes...
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</syntaxhighlight>
... and check they are active:
<syntaxhighlight lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</syntaxhighlight>
====== /etc/idmapd.conf ======
<syntaxhighlight lang=text>
# Domain = localdomain
Domain = this.domain.tld
</syntaxhighlight>
====== /etc/fstab ======
<syntaxhighlight lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</syntaxhighlight>
<syntaxhighlight lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</syntaxhighlight>
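The values come back as plain byte counts: the K/M suffixes in the config are powers of two, so the 256K, 2k, and 80M from the config correspond exactly to the 262144, 2048, and 83886080 shown above:

<syntaxhighlight lang=bash>
#!/bin/bash
# my.cnf size suffixes are binary: K = 1024, M = 1024*1024.
limit=$(( 256 * 1024 ))        # query_cache_limit        256K
min_res=$(( 2 * 1024 ))        # query_cache_min_res_unit 2k
size=$(( 80 * 1024 * 1024 ))   # query_cache_size         80M
echo "$limit $min_res $size"
</syntaxhighlight>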
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<syntaxhighlight lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</syntaxhighlight>
====== Short stupid performance test ======
<syntaxhighlight lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</syntaxhighlight>
Some things seem to work...
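As a plausibility check on those numbers: 1073741824 bytes in 1.7552 s is about 612 MB/s, since dd reports decimal megabytes (10^6 bytes). The same division in awk:

<syntaxhighlight lang=bash>
#!/bin/bash
# dd's MB/s figure uses 10^6 bytes per MB.
rate=$(awk 'BEGIN { printf "%.0f", 1073741824 / 1.7552 / 1000000 }')
echo "${rate} MB/s"
</syntaxhighlight>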
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<syntaxhighlight lang=ini>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
==Analyze==
<syntaxhighlight lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</syntaxhighlight>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<syntaxhighlight lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</syntaxhighlight>
===percona-toolkit===
<syntaxhighlight lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</syntaxhighlight>
===Sysbench===
<syntaxhighlight lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</syntaxhighlight>
==Recover a damaged root account==
===Lost grants===
Try out:
<syntaxhighlight lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
Or:
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</syntaxhighlight>
===Lost password===
<syntaxhighlight lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<syntaxhighlight lang=ini>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</syntaxhighlight>
/etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=ini>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
/etc/mysql/conf.d/myisam.cnf:
<syntaxhighlight lang=ini>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</syntaxhighlight>
/etc/mysql/conf.d/mysqld.cnf:
<syntaxhighlight lang=ini>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe.cnf:
<syntaxhighlight lang=ini>
[mysqld_safe]
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<syntaxhighlight lang=ini>
[mysqld_safe]
syslog
</syntaxhighlight>
/etc/mysql/conf.d/query_cache.cnf:
<syntaxhighlight lang=ini>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</syntaxhighlight>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<syntaxhighlight lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</syntaxhighlight>
719e95a4ff50c7f2e71ab5957059d4c4a19a2cca
Fibrechannel Analyse
0
139
2204
2152
2021-11-25T14:22:36Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Solaris]]
[[category:Brocade]]
[[category:NetApp]]
[[category:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<syntaxhighlight lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</syntaxhighlight>
Two dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 and ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 and ...,1
<syntaxhighlight lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</syntaxhighlight>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards sit in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port:
<syntaxhighlight lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</syntaxhighlight>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously two switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl,
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached, along with two storage systems, to the switch with ID 1, and have a connection to a switch with ID 3 to which two more storage systems are attached.
* Node WWN
Here we see four disk devices with two entries each (same Node WWN).
* Port WWN
This is the port WWN of the devices attached to the switch (at position 8 we find ourselves).
Per storage system we see two port WWNs here, i.e. two paths over our single host port.
Hence four paths later (two per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage system
Host Bus Adapter: FC card
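The Port_ID is the 24-bit Fibre Channel address in hex: the high byte is the switch (domain) ID, the middle byte the area (on common switches the switch port), and the low byte the AL_PA/loop position. A small sketch decoding two of the IDs from the dump above:

<syntaxhighlight lang=bash>
#!/bin/bash
# Decode a Fibre Channel Port_ID (hex string) into its three bytes.
decode_fcid() {
    local fcid
    fcid=$(printf '%06x' "$(( 16#$1 ))")   # zero-pad to 6 hex digits
    printf 'fcid=%s switch=%d port=%d alpa=%d\n' \
        "$fcid" "$(( 16#${fcid:0:2} ))" "$(( 16#${fcid:2:2} ))" "$(( 16#${fcid:4:2} ))"
}
decode_fcid 30200   # switch ID 3, port 2
decode_fcid 10800   # switch ID 1, port 8 (ourselves)
</syntaxhighlight>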
===luxadm -e rdls <HW_path> ===
<syntaxhighlight lang=bash>
# luxadm -e port 2>/dev/null | awk '{print $1;}' | xargs -n 1 luxadm -e rdls 2>/dev/null
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
30200 2 1 0 0 0 0
30600 2 1 0 0 0 0
10200 1 1 0 0 0 0
11400 2 1 0 0 0 0
10b00 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0,1/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
0 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
</syntaxhighlight>
===luxadm probe===
Lists all detected Fibre Channel devices:
<syntaxhighlight lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</syntaxhighlight>
===luxadm display <Diskpath|WWN>===
<syntaxhighlight lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</syntaxhighlight>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough firmware estimate (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy for mapping the LUNs when working with NetApps.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage should be OK ;-)
* Path(s):
Raw device path
Hardware device path
After these, one block follows per path to this device, consisting of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping of controller to FC port via:
<syntaxhighlight lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</syntaxhighlight>
This shows the hardware path from [[#luxadm -e port]].
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
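A small sketch of how one might tally Class/State per path from saved luxadm display output. The sample file below is trimmed from the listing above; the file name is an assumption, and on a live system you would pipe `luxadm display <dev>` in directly:

```shell
# Build a small sample of luxadm display path blocks (trimmed from above).
cat > luxadm_sample.txt <<'EOF'
   Controller           /dev/cfg/c4
    Device Address              202600a0b86e10e4,5
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c4
    Device Address              202700a0b86e10e4,5
    Class                       secondary
    State                       STANDBY
EOF
# Remember the Class of the current block, print "class state" per path,
# then count identical combinations.
awk '/Class/ {cls = $2} /State/ {print cls, $2}' luxadm_sample.txt | sort | uniq -c
```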
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, and so on.
<syntaxhighlight lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</syntaxhighlight>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<syntaxhighlight lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</syntaxhighlight>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<syntaxhighlight lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</syntaxhighlight>
===fcinfo lu -v <device>===
<syntaxhighlight lang=bash>
# fcinfo lu -v /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
OS Device Name: /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
HBA Port WWN: 2100000e1ed89451
Controller: /dev/cfg/c4
Remote Port WWN: 2100f4e9d4564d21
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c97
LUN: 11
State: active/non-optimized
HBA Port WWN: 2100000e1ed89450
Controller: /dev/cfg/c3
Remote Port WWN: 2100f4e9d4564d44
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c1c
LUN: 11
State: active/non-optimized
Vendor: DataCore
Product: Virtual Disk
Device Type: Disk Device
Unformatted capacity: 204800.000 MBytes
</syntaxhighlight>
==mpathadm==
===mpathadm list lu===
<syntaxhighlight lang=bash>
</syntaxhighlight>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<syntaxhighlight lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</syntaxhighlight>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<syntaxhighlight lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</syntaxhighlight>
Clean up all of them:
<syntaxhighlight lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</syntaxhighlight>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful: this performs a forcelip!
<syntaxhighlight lang=bash>
# cfgadm -o force_update -c configure c10
</syntaxhighlight>
==prtconf -Da <device>==
<syntaxhighlight lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</syntaxhighlight>
==LUN masking (blocking access to LUNs of a storage array)==
<syntaxhighlight lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</syntaxhighlight>
<syntaxhighlight lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</syntaxhighlight>
<syntaxhighlight lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</syntaxhighlight>
=Commands : Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<syntaxhighlight lang=bash>
</syntaxhighlight>
===lsscs list array <array_name>===
<syntaxhighlight lang=bash>
</syntaxhighlight>
===lsscs list -a <array_name> fcport===
<syntaxhighlight lang=bash>
</syntaxhighlight>
=Commands : Brocade=
==Switch commands==
===switchshow===
<syntaxhighlight lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</syntaxhighlight>
What does this tell us?
# This switch is the "Principal" of the fabric "Fabric1" (all others are "Subordinate") (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (inter-switch link) to the E-Port of another switch (san-sw_21)
# 7 ports are equipped with SFPs but not in use (0, 4, 11-15)
# 8 ports have neither a license nor an SFP (No_Module)
# 9 ports are in use
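The port-state tally above can be automated. A sketch, assuming the switchshow output was saved to a file (the sample lines are trimmed from the listing above):

```shell
# A few representative data lines from a saved switchshow listing.
cat > switchshow.txt <<'EOF'
  0   0   010000   id    N8   No_Light    FC
  1   1   010100   id    N8   Online      FC  E-Port  10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
  2   2   010200   id    N8   Online      FC  F-Port  21:00:00:24:ff:05:74:e4
 16  16   011000   --    N8   No_Module   FC  (No POD License) Disabled
EOF
# Only data lines start with a numeric index; column 6 is the port state.
awk '$1 ~ /^[0-9]+$/ {count[$6]++} END {for (s in count) print s, count[s]}' switchshow.txt | sort
# -> No_Light 1
#    No_Module 1
#    Online 2
```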
===fabricshow===
<syntaxhighlight lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</syntaxhighlight>
===islshow===
<syntaxhighlight lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</syntaxhighlight>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<syntaxhighlight lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</syntaxhighlight>
Behind this port is a NetApp 8080 running cDOT, as you can see with <i>nodefind <address></i>:
<syntaxhighlight lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</syntaxhighlight>
Now look with <i>portloginshow <portnumber></i>:
<syntaxhighlight lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</syntaxhighlight>
With this information you can find out more about the WWNs:
<syntaxhighlight lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</syntaxhighlight>
Even the vserver is shown (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<syntaxhighlight lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</syntaxhighlight>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host ssh-pub-key on the switches===
<syntaxhighlight lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</syntaxhighlight>
===Generate ssh-key on the switches===
<syntaxhighlight lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</syntaxhighlight>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<syntaxhighlight lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</syntaxhighlight>
===Now the script on the backup host===
<syntaxhighlight lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</syntaxhighlight>
==Script for parsing a configupload file==
<syntaxhighlight lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</syntaxhighlight>
=Commands: NetApp=
==fcp topology show : Where is my front-end SAN attached?==
<syntaxhighlight lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</syntaxhighlight>
==fcp config <port> : Which WWN do I have?==
<syntaxhighlight lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</syntaxhighlight>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
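That decoding can be sketched in shell; the split into bytes (domain, area/port, AL_PA) follows the standard 24-bit FC address layout:

```shell
# Split the 24-bit FC "host address" from the fcp config output above
# into its three bytes: switch domain, area (port) and AL_PA.
addr=010600
domain=$(printf '%s' "$addr" | cut -c1-2)
area=$(printf '%s' "$addr" | cut -c3-4)
alpa=$(printf '%s' "$addr" | cut -c5-6)
echo "switch domain: $domain, port: $area, al_pa: $alpa"
# -> switch domain: 01, port: 06, al_pa: 00
```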
==fcp wwpn-alias (set|show) : Alias names for more clarity when debugging==
<syntaxhighlight lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</syntaxhighlight>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, this is how it works:
<syntaxhighlight lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</syntaxhighlight>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs, including several per line if a line contains more than one.
<syntaxhighlight lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</syntaxhighlight>
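A portable variant of the same idea, for awk versions without gawk's match() array form: it uses RSTART/RLENGTH instead and builds the WWN pattern without interval expressions. The input line here is just an illustration:

```shell
echo "port 10:00:00:05:33:df:43:5a links to 50:0a:09:81:8d:32:5d:c4" | \
awk 'BEGIN {
    # A WWN is eight hex byte pairs separated by colons.
    h = "[0-9a-f][0-9a-f]"
    re = h
    for (i = 1; i <= 7; i++) re = re ":" h
}
{
    line = $0
    # Print every match, then continue scanning after it.
    while (match(line, re)) {
        print substr(line, RSTART, RLENGTH)
        line = substr(line, RSTART + RLENGTH)
    }
}'
# -> 10:00:00:05:33:df:43:5a
#    50:0a:09:81:8d:32:5d:c4
```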
==Some additions to NetApp's sanlun lun show on Solaris==
<syntaxhighlight lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,online[disk],paths[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</syntaxhighlight>
4bf7154b96c89097b7c2265584ecfc2ccb8ff0f1
SunCluster oneliner
0
189
2205
655
2021-11-25T14:22:40Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:SunCluster|Einzeiler]]
==Resource Groups to remaster==
<syntaxhighlight lang=bash>
# /usr/cluster/bin/clrg status | \
/usr/bin/nawk '
NR<=5 || ( NF>=3 && $(NF-1)=="Yes" ){
next;
}
NF==4 {
rg=$1;
primary=$2;
if($NF=="Online"){
printf "%20s\t%s on %s\n",rg,$NF,primary
}
while($0 !~ /^$/){
getline;
if($NF=="Online"){
printf "%20s\t%s on %s, but not on primary %s\n",rg,$NF,$1,primary;
list=list" "rg
}
}
}
END{
if(list != ""){
printf "To fix it do:\n\tclrg remaster %s\n",list;
}
}'
</syntaxhighlight>
993b7871f9c455265881004d9bc4fcb6b0244d9a
Sendmail
0
384
2206
2153
2021-11-25T14:22:46Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
=Compile sendmail=
==Solaris 10==
Untar source, then go into the source directory.
===devtools/Site/site.config.m4===
<syntaxhighlight lang=m4>
dnl #####################################################################
dnl ### Changes to disable the default NIS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF', `-UNIS')
dnl #####################################################################
dnl ### Changes for PH_MAP support. ###
dnl #####################################################################
APPENDDEF(`confMAPDEF',`-DPH_MAP')
APPENDDEF(`confLIBS', `-lphclient')
APPENDDEF(`confINCDIRS', `-I/opt/nph/include')
APPENDDEF(`confLIBDIRS', `-L/opt/nph/lib')
dnl #####################################################################
dnl ### Changes for STARTTLS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF',`-DSTARTTLS')
APPENDDEF(`confLIBS', `-lssl -lcrypto')
APPENDDEF(`confLIBDIRS', `-L/opt/openssl/lib -R/opt/openssl/lib')
APPENDDEF(`confINCDIRS', `-I/opt/openssl/include')
dnl #####################################################################
dnl ### GCC settings ###
dnl #####################################################################
define(`confCC', `gcc')
define(`confOPTIMIZE', `-O3')
define(`confCCOPTS', `-m64 -B/usr/ccs/bin/amd64')
define(`confLDOPTS', `-m64 -static-libgcc -lgcc_s_amd64')
APPENDDEF(`confENVDEF', `-DSM_CONF_STDBOOL_H=0')
APPENDDEF(`confLIBDIRS', `-L/lib/64 -R/lib/64 -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64')
dnl #####################################################################
dnl ### Use the more modern shell ###
dnl #####################################################################
define(`confSHELL', `/usr/bin/bash')
dnl #####################################################################
dnl ### Installdirs ###
dnl #####################################################################
define(`confMANROOT', `/opt/sendmail-8.16.1/share/man/cat')
define(`confMANROOTMAN', `/opt/sendmail-8.16.1/share/man/man')
define(`confMBINDIR', `/opt/sendmail-8.16.1/sbin')
define(`confUBINDIR', `/opt/sendmail-8.16.1/bin')
</syntaxhighlight>
<syntaxhighlight lang=bash>
# sh ./Build -c
# cd cf/cf
# cp generic-solaris.mc sendmail.mc
# sh ./Build sendmail.cf
# sh ./Build install-cf
# mkdir -p /opt/sendmail-8.16.1/{bin,share/man/cat{1,5,8}} ; ./Build install ;
</syntaxhighlight>
== Using the original Solaris 10 svc to start your own sendmail ==
If you have set config/local_only=true at the parameters of svc:/network/smtp:sendmail the service will fail with:
Invalid operation mode l
This is because the start script will result in calling sendmail with the option "-bl" when config/local_only=true is set.
So put this in your sendmail.mc instead:
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
and set config/local_only=false:
<syntaxhighlight lang=bash>
# svccfg -s svc:/network/smtp:sendmail setprop config/local_only=false
# svcadm refresh svc:/network/smtp:sendmail
</syntaxhighlight>
After that sendmail might come up :-).
72fa96501195a0007913a3479d9c570ff017d9c3
Mauersegler
0
381
2207
2101
2021-11-25T14:22:49Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
==Playing attraction calls with a Raspberry Pi and Bluetooth speakers==
===What I used to build it===
* Raspberry Pi 3B.
* Micro SD card (see [https://www.raspbian.org/ Raspbian.org] for the currently required size).
* USB power supply with micro-USB plug (for the Raspberry Pi).
* Bluetooth-capable, waterproof speakers (in my case [https://www.amazon.de/gp/product/B07QY66L9M Wireless Bluetooth Lautsprecher, Sonkir Tragbarer Bluetooth 5.0 TWS Lautsprecher mit Dual-Treiber Bass, 3D-Stereo, FM Radio, Freisprechfunktion, integriertem 1500-mAh-Akku]).
* USB power supply with micro-USB plug (for the speakers; other speakers may need a different power supply!).
* My laptop has a built-in SD card reader to get the operating system onto the SD card. If yours does not, you also need a USB SD card reader, which does not cost much either.
===Reasons for this choice===
Thanks to the good range of Bluetooth, the Raspberry Pi can stay inside the house and only the speakers and one power supply have to go outside.
===Installing Raspbian on the Pi===
* Instructions etc. can be found on [https://www.raspbian.org/ Raspbian.org]; duplicating all of that here would make no sense.
===Enabling Bluetooth===
Connect to the Raspberry Pi via ssh as the user pi.
Windows users can use [https://www.putty.org/ PuTTY] for this.
====Permanently enable the Bluetooth service====
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl enable bluetooth.service
pi@raspberrypi:~ $ sudo systemctl start bluetooth.service
</syntaxhighlight>
====Checking the Bluetooth service status====
This is how it should look:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-02-03 09:18:58 CET; 32min ago
Docs: man:bluetoothd(8)
Main PID: 943 (bluetoothd)
Status: "Running"
Tasks: 1 (limit: 2062)
CGroup: /system.slice/bluetooth.service
`-943 /usr/lib/bluetooth/bluetoothd
...
</syntaxhighlight>
If, on the other hand, it looks like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
</syntaxhighlight>
Then the corresponding Bluetooth driver modules are not loaded in the operating system.
====Enabling the Bluetooth modules====
If the modules are disabled (blacklisted), we have to change that.
The command
egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
shows you the file where this happens.
Example:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist hci_uart
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist btbcm
</syntaxhighlight>
In my example that is <i>/etc/modprobe.d/blacklist-bluetooth.conf</i>.
The command
sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" <file>
comments out the <i>blacklist</i> lines for the two required modules.
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" /etc/modprobe.d/blacklist-bluetooth.conf
</syntaxhighlight>
Then perform a reboot:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo reboot
</syntaxhighlight>
Once the modules are loaded after the reboot, it looks like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo lsmod | grep bt
btbcm 16384 1 hci_uart
bluetooth 393216 37 hci_uart,bnep,btbcm,rfcomm
</syntaxhighlight>
Then [[#Check the Bluetooth service status|check the Bluetooth service status]] again.
Now everything should be fine.
===Finding the Bluetooth speakers===
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo bluetoothctl
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: yes
</syntaxhighlight>
Now switch on the Bluetooth speakers:
<syntaxhighlight lang=bash>
[NEW] Device D6:53:25:BE:37:73 SPEAKER5.0
</syntaxhighlight>
Ah, there it is!
Now connect and exit:
<syntaxhighlight lang=bash>
[bluetooth]# scan off
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: no
Discovery stopped
[bluetooth]#
[bluetooth]# connect D6:53:25:BE:37:73
Attempting to connect to D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Connected: yes
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 ServicesResolved: yes
[CHG] Device D6:53:25:BE:37:73 Paired: yes
Connection successful
[SPEAKER5.0]# trust D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Trusted: yes
Changing D6:53:25:BE:37:73 trust succeeded
[SPEAKER5.0]# paired-devices
Device D6:53:25:BE:37:73 SPEAKER5.0
[SPEAKER5.0]# quit
</syntaxhighlight>
Now the speakers' address has to be entered in <i>/etc/asound.conf</i> (this file usually does not exist yet; simply create it).
<syntaxhighlight>
pcm.!default {
type plug
slave {
pcm {
type bluealsa
device D6:53:25:BE:37:73
profile "a2dp"
}
}
hint {
show on
description "Bluetooth SPEAKER5.0"
}
}
ctl.!default {
type bluealsa
}
</syntaxhighlight>
Reboot once more:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo reboot
</syntaxhighlight>
The speakers should now also play a short signal when the RaspberryPi starts and the pairing takes place.
Now we can test:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ aplay -L
...
default
Bluetooth SPEAKER5.0
...
</syntaxhighlight>
fe072f25c2c0a42035d1bd1361fbd06fde29565e
2224
2207
2021-11-25T14:27:32Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
==Playing lure calls with a RaspberryPi and Bluetooth speakers==
===What I used to build it===
* RaspberryPi 3B.
* Micro SD card (see [https://www.raspbian.org/ Raspbian.org] for the currently required size).
* USB power supply with Micro-USB connector (for the RaspberryPi).
* Bluetooth-capable, waterproof speakers (in my case [https://www.amazon.de/gp/product/B07QY66L9M Wireless Bluetooth Lautsprecher, Sonkir Tragbarer Bluetooth 5.0 TWS Lautsprecher mit Dual-Treiber Bass, 3D-Stereo, FM Radio, Freisprechfunktion, integriertem 1500-mAh-Akku]).
* USB power supply with Micro-USB connector (for the speakers; other speakers may need a different power supply!).
* My laptop has a built-in SD card reader to get the operating system onto the SD card. If you don't have one, you also need a USB SD card reader, which does not cost much.
===Reasons for this choice===
Thanks to Bluetooth's good range, the RaspberryPi can stay inside the house; only the speakers and one power supply have to go outside.
===Installing Raspbian on the Pi===
* Instructions etc. can be found on [https://www.raspbian.org/ Raspbian.org]; it would make no sense to duplicate everything here.
===Enabling Bluetooth===
Connect to the RaspberryPi via ssh as user pi.
Windows users can use [https://www.putty.org/ Putty] for this.
====Enable the Bluetooth service permanently====
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl enable bluetooth.service
pi@raspberrypi:~ $ sudo systemctl start bluetooth.service
</syntaxhighlight>
====Check the Bluetooth service status====
This is what it should look like:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-02-03 09:18:58 CET; 32min ago
Docs: man:bluetoothd(8)
Main PID: 943 (bluetoothd)
Status: "Running"
Tasks: 1 (limit: 2062)
CGroup: /system.slice/bluetooth.service
`-943 /usr/lib/bluetooth/bluetoothd
...
</syntaxhighlight>
If, however, it looks like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
</syntaxhighlight>
then the operating system has not loaded the required Bluetooth driver modules.
====Enable the Bluetooth modules====
If the modules are disabled (blacklisted), we have to change that.
The command
egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
shows the file where this happens.
Example:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist hci_uart
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist btbcm
</syntaxhighlight>
In my example, that is <i>/etc/modprobe.d/blacklist-bluetooth.conf</i>.
The command
sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" <file>
comments out the <i>blacklist</i> lines for the two required modules.
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" /etc/modprobe.d/blacklist-bluetooth.conf
</syntaxhighlight>
Then perform a reboot:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo reboot
</syntaxhighlight>
Once the modules are loaded after the reboot, it looks like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo lsmod | grep bt
btbcm 16384 1 hci_uart
bluetooth 393216 37 hci_uart,bnep,btbcm,rfcomm
</syntaxhighlight>
Then [[#Check the Bluetooth service status|check the Bluetooth service status]] again.
Now everything should be fine.
===Finding the Bluetooth speakers===
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo bluetoothctl
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: yes
</syntaxhighlight>
Now switch on the Bluetooth speakers:
<syntaxhighlight lang=bash>
[NEW] Device D6:53:25:BE:37:73 SPEAKER5.0
</syntaxhighlight>
Ah, there it is!
Now connect and exit:
<syntaxhighlight lang=bash>
[bluetooth]# scan off
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: no
Discovery stopped
[bluetooth]#
[bluetooth]# connect D6:53:25:BE:37:73
Attempting to connect to D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Connected: yes
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 ServicesResolved: yes
[CHG] Device D6:53:25:BE:37:73 Paired: yes
Connection successful
[SPEAKER5.0]# trust D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Trusted: yes
Changing D6:53:25:BE:37:73 trust succeeded
[SPEAKER5.0]# paired-devices
Device D6:53:25:BE:37:73 SPEAKER5.0
[SPEAKER5.0]# quit
</syntaxhighlight>
Now the speakers' address has to be entered in <i>/etc/asound.conf</i> (this file usually does not exist yet; simply create it).
<syntaxhighlight>
pcm.!default {
type plug
slave {
pcm {
type bluealsa
device D6:53:25:BE:37:73
profile "a2dp"
}
}
hint {
show on
description "Bluetooth SPEAKER5.0"
}
}
ctl.!default {
type bluealsa
}
</syntaxhighlight>
Reboot once more:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo reboot
</syntaxhighlight>
The speakers should now also play a short signal when the RaspberryPi starts and the pairing takes place.
Now we can test:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ aplay -L
...
default
Bluetooth SPEAKER5.0
...
</syntaxhighlight>
bba77ea67624b6547eb6843ad3c8326732baa624
Linux Software RAID
0
286
2208
2052
2021-11-25T14:22:54Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but maybe we are lucky and the disks are just marked as bad.
==== cat /proc/mdstat ====
<syntaxhighlight lang=bash>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
...
md10 : inactive sdap1[11] sdao1[5] sdah1[15](S) sdag1[4] sdy1[3] sdz1[14] sdr1[8] sdb1[13] sdq1[16](S) sdi1[1] sda1[12]
5236577280 blocks super 1.2
...
</syntaxhighlight>
The state is <i>inactive</i>; this is not what we want. Look at the details in the next step.
==== mdadm --detail ====
<syntaxhighlight lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</syntaxhighlight>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
And you have to do this:
<syntaxhighlight lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</syntaxhighlight>
===Check the status===
<syntaxhighlight lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</syntaxhighlight>
This is good:
State : clean, degraded, recovering
Better wait for completion before the next reboot:
Rebuild Status : 5% complete
It should continue rebuilding after a reboot, but... you know how these things go.
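To keep an eye on the progress, the percentage can be pulled out of the <i>mdadm --detail</i> output with a little awk. A sketch, shown here on a captured sample line; in practice you would pipe the real output in instead:

```shell
# Extract the rebuild percentage; the sample line mimics `mdadm --detail` output
sample='  Rebuild Status : 5% complete'
echo "$sample" | awk -F' : ' '/Rebuild Status/{print $2}'
```

Combined with <i>watch</i>, this gives a simple progress display.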
==Replace a disk in a mirror==
Device /dev/cciss/c0d1 is a freshly replaced disk in an [[HP_Smart_Array_Controller#reenable_disk_after_replacement | HP Array Controller]]
<syntaxhighlight lang=bash>
[root@app02 ~]# sfdisk -d /dev/cciss/c0d0 | sfdisk --no-reread --force /dev/cciss/c0d1
[root@app02 ~]# mdadm --manage /dev/md0 --fail /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --remove /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --add /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md1 --fail /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --remove /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --add /dev/cciss/c0d1p2
[root@app02 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 cciss/c0d1p2[2] cciss/c0d0p2[0]
36925312 blocks [2/1] [U_]
resync=DELAYED
md0 : active raid1 cciss/c0d1p1[2] cciss/c0d0p1[0]
256003712 blocks [2/1] [U_]
[>....................] recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
unused devices: <none>
</syntaxhighlight>
c7dafc7c1413f33d108c9afc9310c148be97124a
Ansible tips and tricks
0
299
2209
1978
2021-11-25T14:23:09Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Ansible|Tips and tricks]]
== Ansible commandline ==
=== Get settings for host ===
Gather the settings for the host given in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
Gather the groups for the host given in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads some variables from the response file, sets each of them as a fact (prefixed with oracle_ if it is not already), and collects them in the variable <i>oracle_environment</i>. The variable <i>oracle_environment</i> can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
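The awk call at the heart of the first task can be tried standalone. A sketch; the response file contents and paths below are made up for illustration:

```shell
# Pull a single value out of an Oracle-style response file,
# just like the shell step in the task above (sample data, made-up paths)
tmp=$(mktemp)
printf 'ORACLE_BASE=/u01/app/oracle\nORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1\n' > "$tmp"
awk -F '=' '/ORACLE_HOME/{print $2;}' "$tmp"
rm -f "$tmp"
```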
== Gathering oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env._setitem_(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
69b10947f61f6881516cd451d0f137a592db8683
Systemd
0
233
2210
2151
2021-11-25T14:23:15Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, as daemon names usually are, it has to be written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the normal init system with the ability to watch processes after startup, to list sockets owned by processes started with systemd, security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<syntaxhighlight lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</syntaxhighlight>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<syntaxhighlight lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</syntaxhighlight>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<syntaxhighlight lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</syntaxhighlight>
==Display unit declaration==
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==Sockets==
<syntaxhighlight lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</syntaxhighlight>
==View dependencies==
What depends on ''zfs.target'':
<syntaxhighlight lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</syntaxhighlight>
And what do we need to reach the ''zfs.target''?
<syntaxhighlight lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</syntaxhighlight>
==Get the main PID of a service==
<syntaxhighlight lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</syntaxhighlight>
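On older systemd versions that do not know <i>--value</i> yet, the same result can be had by stripping the key yourself; a sketch on a captured sample line:

```shell
# Without --value, systemctl prints "MainPID=2026"; strip the key in shell
line='MainPID=2026'
echo "${line#MainPID=}"
```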
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<syntaxhighlight lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</syntaxhighlight>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look to understand what these capabilities are.
'''BUT''' beware of programs which just test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits UID changes. No setuid binary will help an attacker gain more privileges than the user of the exploited service.
==Limiting access to a socket==
For example for the check_mk monitoring system:
<syntaxhighlight lang=bash>
# systemctl edit check_mk.socket
</syntaxhighlight>
Deny all except the monitoring server (172.17.128.193):
<syntaxhighlight lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</syntaxhighlight>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<syntaxhighlight lang=bash>
# systemctl edit check_mk.socket
</syntaxhighlight>
First remove the old value, then set the new one.
<syntaxhighlight lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</syntaxhighlight>
=systemd-resolved, the name resolution service=
==Status==
<syntaxhighlight lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</syntaxhighlight>
==Cache statistics==
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
==Flush the cache==
<syntaxhighlight lang=bash>
$ systemd-resolve --flush-caches
</syntaxhighlight>
Check with:
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can easily be done through <i>/etc/systemd/timesyncd.conf</i>:
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</syntaxhighlight>
NTP is a space-separated list of NTP servers.
FallbackNTP lists servers to use if none of the NTP servers can be reached.
If you want to split the settings into multiple files or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
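Such a drop-in, overriding just the server list, could look like this (the file name <i>/etc/systemd/timesyncd.conf.d/10-ntp-servers.conf</i> is made up; only the <i>.conf</i> ending matters, and the server names are the ones used above):

```ini
# Hypothetical drop-in: /etc/systemd/timesyncd.conf.d/10-ntp-servers.conf
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
```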
After you have set up the config you can enable timesyncd via:
<syntaxhighlight lang=bash>
# timedatectl set-ntp true
</syntaxhighlight>
Control your success with:
<syntaxhighlight lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</syntaxhighlight>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<syntaxhighlight lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</syntaxhighlight>
Hmm... let us take a look at ntp:
<syntaxhighlight lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</syntaxhighlight>
Maybe we should uninstall or disable ntp first ;-).
<syntaxhighlight lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</syntaxhighlight>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
This means: to reach the ''zfs.target'', ''zed.service'' is wanted (started if it is enabled), and ''zfs-mount.service'' and ''zfs-share.service'' are required.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit comes down the directories are cleared. This is done by a separate namespace for this unit.
<syntaxhighlight lang=ini>
[Service]
...
PrivateTmp=true|false
...
</syntaxhighlight>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
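A sketch with made-up unit names: both units set ''PrivateTmp=true'', and the joining unit names the owning unit in its ''[Unit]'' section:

```ini
# a.service (hypothetical) owns the private /tmp
[Service]
PrivateTmp=true

# b.service (hypothetical) shares a.service's /tmp namespace
[Unit]
JoinsNamespaceOf=a.service

[Service]
PrivateTmp=true
```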
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<syntaxhighlight lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</syntaxhighlight>
With this capability set we can use this as a normal user:
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</syntaxhighlight>
If we remove this capability it does not work:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</syntaxhighlight>
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</syntaxhighlight>
Of course it still works as root, as root has all capabilities:
<syntaxhighlight lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</syntaxhighlight>
So we better set this capability again:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</syntaxhighlight>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here it is /var/chroot), some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</syntaxhighlight>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the mount destination path, with dashes escaped. You can easily generate the resulting name with systemd-escape:
<syntaxhighlight lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</syntaxhighlight>
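The escaping rule itself is simple: drop the leading slash, escape literal dashes as \x2d first, and then turn the remaining slashes into dashes. A minimal shell sketch of what systemd-escape -p does for this path (the function name escape_path is made up; the real tool also escapes other special characters):

```shell
# Sketch of systemd-escape -p: '-' must be escaped before '/' is
# rewritten, otherwise the dashes produced from slashes would be
# escaped as well.
escape_path() {
    p=${1#/}                                    # drop the leading slash
    printf '%s\n' "$p" | sed -e 's/-/\\x2d/g' -e 's,/,-,g'
}

escape_path /var/chroot/run/systemd/journal/dev-log
# -> var-chroot-run-systemd-journal-dev\x2dlog
```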
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\x2dlog.mount for the mount===
Remember to escape the backslash (\\) of the x2d sequence (which encodes a dash, -) on the shell.
<syntaxhighlight lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this content into the file:
<syntaxhighlight lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
===Mount the socket===
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</syntaxhighlight>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<syntaxhighlight lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</syntaxhighlight>
Restart the journal daemon:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<syntaxhighlight>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</syntaxhighlight>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</syntaxhighlight>
===Restart syslog-ng daemon===
<syntaxhighlight lang=bash>
# systemctl restart syslog-ng.service
</syntaxhighlight>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>, which is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of e.g. <i>apache2.service</i>, you can place a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
Every time <i>systemd-tmpfiles-clean.service</i> runs, this cleans up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours.
The <i>%b</i> in the path is the current boot ID: an ID which is generated at each boot.
You can get the boot ID with:
<syntaxhighlight lang=bash>
# journalctl --list-boots
</syntaxhighlight>
The second field of the last line is the current one, e.g.:
<syntaxhighlight lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</syntaxhighlight>
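The boot ID comes straight from the kernel; journalctl merely shows it without the dashes of its UUID form. So, as a sketch, you can equally derive it like this:

```shell
# /proc/sys/kernel/random/boot_id holds the boot ID as a dashed UUID;
# systemd paths like /tmp/systemd-private-<bootid>-... use it without dashes.
tr -d '-' < /proc/sys/kernel/random/boot_id
```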
When will that be? Try:
<syntaxhighlight lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</syntaxhighlight>
OK, but you probably want to run it once an hour? OK, just reschedule the timer like this:
<syntaxhighlight lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</syntaxhighlight>
and change the interval like this
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== fwupd.service behind proxy ==
<syntaxhighlight lang=bash>
# systemctl edit fwupd-refresh.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Service]
Environment=http_proxy="http://user:passw0rd@proxy.intern.net:8080" https_proxy="http://user:passw0rd@proxy.intern.net:8080"
PassEnvironment=http_proxy https_proxy
</syntaxhighlight>
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<syntaxhighlight lang=ini>
# /etc/systemd/system/tomcat-ndr.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<syntaxhighlight>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</syntaxhighlight>
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd runs commands without a shell, so redirections
# like ">> 2>&1 &" do not work here.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</syntaxhighlight>
d53b2dee19ae1a367a7fb6712e4dd924be89d10e
Ubuntu remove desktop
0
385
2211
2123
2021-11-25T14:23:35Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Ubuntu|desktop]]
=Ubuntu 20.04=
<syntaxhighlight lang=bash>
# GRUB: Remove splash and quiet from GRUB_CMDLINE_LINUX_DEFAULT
sudo perl -pi -e 's#^(GRUB_CMDLINE_LINUX_DEFAULT=".*)(quiet)(.*")$#\1\3#g,s#^(GRUB_CMDLINE_LINUX_DEFAULT=".*)(splash)(.*")$#\1\3#g' /etc/default/grub
# GRUB: Add or change to GRUB_DISABLE_OS_PROBER=true
sudo perl -ni -e '$c=1 if s/^GRUB_DISABLE_OS_PROBER=.*$/GRUB_DISABLE_OS_PROBER=true/; print; if(eof){print "GRUB_DISABLE_OS_PROBER=true\n" unless $c==1};' /etc/default/grub
# Remove desktop packages
sudo apt --yes purge adwaita-icon-theme gedit-common gir1.2-gdm-1.0 \
gir1.2-gnomebluetooth-1.0 gir1.2-gnomedesktop-3.0 gir1.2-goa-1.0 \
gnome-accessibility-themes gnome-bluetooth gnome-calculator gnome-calendar \
gnome-characters gnome-control-center gnome-control-center-data \
gnome-control-center-faces gnome-desktop3-data \
gnome-font-viewer gnome-getting-started-docs gnome-getting-started-docs-ru \
gnome-initial-setup gnome-keyring gnome-keyring-pkcs11 gnome-logs \
gnome-mahjongg gnome-menus gnome-mines gnome-online-accounts \
gnome-power-manager gnome-screenshot gnome-session-bin gnome-session-canberra \
gnome-session-common gnome-settings-daemon gnome-settings-daemon-common \
gnome-shell gnome-shell-common gnome-shell-extension-appindicator \
gnome-shell-extension-desktop-icons gnome-shell-extension-ubuntu-dock \
gnome-startup-applications gnome-sudoku gnome-system-monitor gnome-terminal \
gnome-terminal-data gnome-themes-extra gnome-themes-extra-data gnome-todo \
gnome-todo-common gnome-user-docs gnome-user-docs-ru gnome-video-effects \
language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-ru \
language-pack-gnome-ru-base language-selector-gnome libgail18 libgail18 \
libgail-common libgail-common libgnome-autoar-0-0 libgnome-bluetooth13 \
libgnome-desktop-3-19 libgnome-games-support-1-3 libgnome-games-support-common \
libgnomekbd8 libgnomekbd-common libgnome-menu-3-0 libgnome-todo libgoa-1.0-0b \
libgoa-1.0-common libpam-gnome-keyring libsoup-gnome2.4-1 libsoup-gnome2.4-1 \
nautilus-extension-gnome-terminal pinentry-gnome3 yaru-theme-gnome-shell \
yaru-theme-icon yaru-theme-sound ubuntu-wallpapers ubuntu-wallpapers-focal \
x11-common x11-apps xcursor-themes xbitmaps xfonts-base xfonts-encodings
# Purge unreferred packages
sudo apt --yes autopurge
# Fix plymouth problems
sudo apt --yes install plymouth-theme-spinner
# Ensure the boot environment creation works
update-initramfs -k $(uname -r) -u
update-grub
</syntaxhighlight>
5a857dbb83d18115846ebe9cfc0b7c5d97c0209f
Roundcube
0
232
2212
929
2021-11-25T14:23:48Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Web]]
[[Kategorie:Mail]]
==Automatic import carddav from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<syntaxhighlight lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</syntaxhighlight>
This automatically imports all Owncloud contacts from the addressbook "contacts" into roundcube carddav:
/usr/share/roundcube/plugins/carddav/config.inc.php
<syntaxhighlight lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from CalDAV preferences section so users can't even
// see it
'hide' => false,
);
</syntaxhighlight>
9c5b316f8a6504747aea2850a54284ab02f3554b
2233
2212
2021-11-25T14:29:06Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Web]]
[[Kategorie:Mail]]
==Automatic import carddav from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<syntaxhighlight lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</syntaxhighlight>
This automatically imports all Owncloud contacts from the addressbook "contacts" into roundcube carddav:
/usr/share/roundcube/plugins/carddav/config.inc.php
<syntaxhighlight lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from CalDAV preferences section so users can't even
// see it
'hide' => false,
);
</syntaxhighlight>
16bda4dedfe69b1ebb7e0bcda9fd4ead7fc56f9c
SSH Tipps und Tricks
0
75
2213
2094
2021-11-25T14:23:54Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:SSH|Tipps]]
[[Kategorie:Putty|Tipps]]
=SSH, the way to the destination=
==SSH via one or more hops==
To make an SSH connection from Host_A to Host_B, you have to tunnel through two machines (GW_1 and GW_2). If you log in hop by hop, it is often quite hard to carry the port forwardings or the SOCKS5 proxy along. It is easier to define jump hosts for the whole way from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for it in ~/.ssh/config:
<pre>
Host Host_B
ProxyJump GW_2
</pre>
But we can only reach GW_2 via GW_1, so we need an entry for that, too:
<pre>
Host GW_2
ProxyJump GW_1
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and get tunneled through the two gateways GW_1 and GW_2.
==Port forwardings, e.g. for NFS, are now simply done like this==
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are then established in the background, and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by SSH with the host (%h) and port (%p).
==Breaking out of paradise==
Problem: the environment you are sitting in is unfortunately so boxed in with firewalls that you cannot work. But you have to get out via SSH to quickly look at or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
Furthermore, you need an SSH server whose sshd listens on port 443, because most proxies only let you through to well-known ports.
Then put into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
In a flash, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course you can use that host in another ProxyCommand again, and so on.
==Oh yes... the internal wiki...==
That is no problem either if it is only reachable from the internal network; we just ask via a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not found config equivalents for -N and -f yet).
=The fingerprint=
Verification is often easier with shorter strings of digits. That is why the fingerprint is handy for comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restrict users=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Start pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
As the Google Authenticator is a tool which is available for several smartphone OSes, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator succeeds (the code was correct), authentication is complete.
* new_authtok_reqd=done : A required new authentication token is treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator fails, no other authentication will be tried.
* nullok : Allow the user to pass this auth mechanism even without a configured OTP secret.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the setting in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because /etc/pam.d/sshd enables password authentication.
dc54e4ab7de25bb0573452cebd0694f3db460bbd
Solaris zone memory on the fly
0
118
2214
1361
2021-11-25T14:24:10Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|Zone Memory]]
= Setting memory parameter for running zones =
You can change memory parameters of running zones. But remember to also change the zone config file to make the change persistent.
So I always do that in advance.
== Change setting in the config file ==
<syntaxhighlight lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</syntaxhighlight>
== Change settings for the running zone ==
===First take a look===
<syntaxhighlight lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 65536 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</syntaxhighlight>
===Set the new values===
<syntaxhighlight lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
# prctl -n zone.max-locked-memory -v 16g -t privileged -r -e deny -i zone myzone
</syntaxhighlight>
===Prove values===
<syntaxhighlight lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</syntaxhighlight>
Done.
63960bf71764c6739da1671f1c956166015437a1
Solaris Einzeiler
0
200
2215
652
2021-11-25T14:24:39Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|Einzeiler]]
=== netstat -aun or lsof -i -P -n on Solaris 10 ===
<syntaxhighlight lang=bash>
#!/bin/bash
pfiles /proc/* 2>/dev/null | nawk -v port=$1 '
/^[0-9]/ {
pid=$1; cmd=$2; type="unknown"; next;
}
$1 == "SOCK_STREAM" {
type="tcp"; next;
}
$1 == "SOCK_DGRAM" {
type="udp"; next;
}
$2 ~ /AF_INET?/ && ( port=="" || $5==port ) {
if($2 ~ /[0-9]$/ && type !~ /[0-9]$/) type=type""substr($2,8);
if(cmd!="") { printf("%d %s\n",pid,cmd); cmd="" }
printf(" %s:%s/%s\n",$3,$5,type);
}'
</syntaxhighlight>
27deb43daac3931746cb546e683c9656f2496c5c
ESPEasy
0
371
2216
2025
2021-11-25T14:24:48Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
d029767a1137c5d1befa481b770c9a44771aa3a0
VMWare Certificate
0
280
2217
1295
2021-11-25T14:24:54Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:VMWare]]
[[Kategorie:Security]]
== Generate a new certificate ==
=== Disable the shell warning ===
<pre>
-> Bestandsliste
-> Hosts und Cluster
-> <ESX-Host auswählen>
-> Verwalten
-> Einstellungen
-> System
-> Erweiterte Systemeinstellungen
-> <In der Suche "suppress" suchen>
-> UserVars.SuppressShellWarning
-> Bearbeiten: UserVars.SuppressShellWarning = 1
</pre>
=== Allow SSH in the firewall ===
<pre>
-> Bestandsliste
-> Hosts und Cluster
-> <ESX-Host auswählen>
-> Verwalten
-> Einstellungen
-> System
-> Sicherheitsprofil
-> Firewall
-> Eingehende Verbindungen
-> Bearbeiten
-> SSH-Server aktivieren
</pre>
=== Enable SSH ===
<pre>
-> Bestandsliste
-> Hosts und Cluster
-> <ESX-Host auswählen>
-> Verwalten
-> Einstellungen
-> System
-> Sicherheitsprofil
-> Dienste
-> Bearbeiten
-> SSH starten
</pre>
<syntaxhighlight lang=bash>
$ ssh root@esx-host
~ # cd /etc/vmware/ssl
/etc/vmware/ssl # mv rui.key rui.key.orig
/etc/vmware/ssl # mv rui.crt rui.crt.orig
/etc/vmware/ssl # /sbin/generate-certificates
/etc/vmware/ssl # ls -al *.key *.crt
-rw-r--r-- 1 root root 1440 May 30 09:33 rui.crt
-r-------- 1 root root 1704 May 30 09:33 rui.key
</syntaxhighlight>
=== Disable SSH ===
<pre>
-> Bestandsliste
-> Hosts und Cluster
-> <ESX-Host auswählen>
-> Verwalten
-> Einstellungen
-> System
-> Sicherheitsprofil
-> Dienste
-> Bearbeiten
-> SSH stoppen
</pre>
=== Re-enable the shell warning ===
<pre>
-> Bestandsliste
-> Hosts und Cluster
-> <ESX-Host auswählen>
-> Verwalten
-> Einstellungen
-> System
-> Erweiterte Systemeinstellungen
-> <In der Suche "suppress" suchen>
-> UserVars.SuppressShellWarning
-> Bearbeiten: UserVars.SuppressShellWarning = 0
</pre>
=== Restart the CIM server ===
The CIM server has to be restarted so that the new certificate is actually used.
<pre>
-> Bestandsliste
-> Hosts und Cluster
-> <ESX-Host auswählen>
-> Verwalten
-> Einstellungen
-> System
-> Sicherheitsprofil
-> Dienste
-> Bearbeiten
-> CIM-Server
-> Neu Starten
</pre>
218ea84472f06576fc8d916eb1380a30b98537da
Category:Pages using deprecated source tags
14
390
2218
2021-11-25T14:24:56Z
Lollypop
2
Created blank page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
OwnCloud Config
0
195
2219
637
2021-11-25T14:24:56Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:OwnCloud]]
==Separate self-installed apps from bundled apps==
In your config add:
<syntaxhighlight lang=php>
'apps_paths' => array (
0 => array (
'path' => OC::$SERVERROOT.'/apps',
'url' => '/apps',
'writable' => false,
),
1 => array (
'path' => OC::$SERVERROOT.'/other_apps',
'url' => '/other_apps',
'writable' => true,
),
),
</syntaxhighlight>
b8a33bf13c52f3b08ecc74bf26870d15bfd03e72
2231
2219
2021-11-25T14:28:11Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:OwnCloud]]
==Separate self-installed apps from bundled apps==
In your config add:
<syntaxhighlight lang=php>
'apps_paths' => array (
0 => array (
'path' => OC::$SERVERROOT.'/apps',
'url' => '/apps',
'writable' => false,
),
1 => array (
'path' => OC::$SERVERROOT.'/other_apps',
'url' => '/other_apps',
'writable' => true,
),
),
</syntaxhighlight>
f1492935c6013876096d2f655394103344142c77
NetApp and Solaris
0
219
2220
1343
2021-11-25T14:25:01Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
'''Just some unsorted lines...'''
'''Work in progress... do not believe what you read here! It is not verified yet.'''
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<syntaxhighlight lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retriestimeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</syntaxhighlight>
===Check it out===
<syntaxhighlight lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</syntaxhighlight>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If you have 4k as block size in your storage use ashift=12 (alignment shift exponent).
===Status of alignment===
<syntaxhighlight lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</syntaxhighlight>
Or use "lun alignment show":
<syntaxhighlight lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Wide spread reads. I think the ashift is not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</syntaxhighlight>
Or "stats show lun":
<syntaxhighlight lang=bash>
filer01*> stats show -e lun:/vol/TEMP201:.*_align_histo.*
</syntaxhighlight>
===ashift=12? Why 12?===
<syntaxhighlight lang=bash>
# echo "2^12" | bc -l
4096
</syntaxhighlight>
OK... 4k... I see.
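The same arithmetic in shell, for the common ashift values (the sector size is 2^ashift):

```shell
# ashift is the base-2 exponent of the vdev sector size:
# ashift=9 -> 512, ashift=12 -> 4096, ashift=13 -> 8192
for ashift in 9 12 13; do
    echo "ashift=$ashift -> $((1 << ashift)) byte sectors"
done
```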
===What ashift do I have?===
<syntaxhighlight lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</syntaxhighlight>
===Create ZPools on NetApp LUNs with this syntax===
<syntaxhighlight lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</syntaxhighlight>
===Solaris Cluster===
<syntaxhighlight lang=bash>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | nawk '$3 ~ /^\/dev\//{line=$0;gsub(/s[0-9]+$/,"",$3);command="/usr/cluster/bin/cldev list "$3; command | getline; close(command); print line,$1; next;}NR==2{print $0,"DID";next;}NR==3{print $0"-------";next}{print;}'
controller(7mode)/ device host lun
vserver(Cmode) lun-pathname filename adapter protocol size mode DID
--------------------------------------------------------------------------------------------------------------------------------------------------------
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_1 /dev/rdsk/c0t600A0980383033777B244834556D4865d0s2 iscsi0 iSCSI 500.1g C d5
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_2 /dev/rdsk/c0t600A0980383033777B244834556D4866d0s2 iscsi0 iSCSI 500.1g C d6
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_3 /dev/rdsk/c0t600A0980383033777B244834556D4867d0s2 iscsi0 iSCSI 500.1g C d7
...
</syntaxhighlight>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
2081006db604310dc70ef2a27be8f0b429537927
2227
2220
2021-11-25T14:27:52Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:NetApp|Solaris]]
[[Kategorie:Solaris|NetApp]]
'''Just some unsorted lines...'''
'''Work in progress... don't believe what you read here! It has not been verified yet.'''
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<syntaxhighlight lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retries-timeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</syntaxhighlight>
===Check it out===
<syntaxhighlight lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</syntaxhighlight>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (the alignment shift exponent).
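The relation behind that number (2^ashift = physical block size) can be checked in the shell; a minimal sketch, the variable names are mine:
<syntaxhighlight lang=bash>
# Derive the ashift for a given physical block size: 2^ashift = block size.
blocksize=4096
ashift=0
while [ $((1 << ashift)) -lt "$blocksize" ]
do
    ashift=$((ashift + 1))
done
echo "ashift=${ashift}"    # ashift=12 for 4096-byte blocks
</syntaxhighlight>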
===Status of alignment===
<syntaxhighlight lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</syntaxhighlight>
Or use "lun alignment show":
<syntaxhighlight lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Widely spread reads; I think the ashift is not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</syntaxhighlight>
Or "stats show lun":
<syntaxhighlight lang=bash>
filer01*> stats show -e lun:/vol/TEMP201:.*_align_histo.*
</syntaxhighlight>
===ashift=12? Why 12?===
<syntaxhighlight lang=bash>
# echo "2^12" | bc -l
4096
</syntaxhighlight>
OK... 4k... I see.
===What ashift do I have?===
<syntaxhighlight lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</syntaxhighlight>
===Create ZPools on NetApp LUNs with this syntax===
<syntaxhighlight lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</syntaxhighlight>
===Solaris Cluster===
<syntaxhighlight lang=bash>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | nawk '$3 ~ /^\/dev\//{line=$0;gsub(/s[0-9]+$/,"",$3);command="/usr/cluster/bin/cldev list "$3; command | getline; close(command); print line,$1; next;}NR==2{print $0,"DID";next;}NR==3{print $0"-------";next}{print;}'
controller(7mode)/ device host lun
vserver(Cmode) lun-pathname filename adapter protocol size mode DID
--------------------------------------------------------------------------------------------------------------------------------------------------------
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_1 /dev/rdsk/c0t600A0980383033777B244834556D4865d0s2 iscsi0 iSCSI 500.1g C d5
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_2 /dev/rdsk/c0t600A0980383033777B244834556D4866d0s2 iscsi0 iSCSI 500.1g C d6
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_3 /dev/rdsk/c0t600A0980383033777B244834556D4867d0s2 iscsi0 iSCSI 500.1g C d7
...
</syntaxhighlight>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
79bbc67aef5f7e5b3ee978fd0725e5c997dc66d3
Exim cheatsheet
0
27
2221
1399
2021-11-25T14:25:55Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==View the headers of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mailaddress></pre>
===With verbose debugging===
<pre># exim -bv -d+all <mailaddress></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <Domain></pre>
==How do I trigger delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue
<pre># exiqgrep -r <pattern></pre>
exipick is better than exigrep!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails to <recipient> in the queue:
<pre>
# exipick -r <recipient>
</pre>
List all mails from <sender> in the queue:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address for all mails that are more than 40 and less than 50 minutes old and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <Parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<syntaxhighlight lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</syntaxhighlight>
Delete the entries:
For this, use the somewhat unpolished tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the database. Then type d (for delete) followed by Enter, and the entry is gone.
<syntaxhighlight lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</syntaxhighlight>
==Spam==
<syntaxhighlight lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</syntaxhighlight>
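The core of the loop above is just counting "result: Y" lines per file; a self-contained sketch with made-up sample lines standing in for the spamd log:
<syntaxhighlight lang=bash>
# Count spam hits ("result: Y") in a log file; the sample data here is
# hypothetical and stands in for /var/log/spamassassin/spamd-exim-acl.log.
log=$(mktemp)
printf 'result: Y\nresult: N\nresult: Y\n' > "$log"
grep -c 'result: Y' "$log"    # 2 for this sample
rm -f "$log"
</syntaxhighlight>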
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</syntaxhighlight>
But logrotate is a little bit tricky with these files.
I found this to be a good way to rotate the logfiles:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
        daily
        rotate 0
        ifempty
        create
        lastaction
                # gzip all files matching the regex that are not from today
                /usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
                # delete gzipped files matching the regex that are older than 90 days
                /usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
        endscript
}
</pre>
== touch the dummy rotate file ==
This one is needed to trigger the rotation even if it is a dummy.
<syntaxhighlight lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</syntaxhighlight>
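To check the filename regex used by the two find commands without touching real logs, a dry run against dummy files works (the directory and file names here are made up):
<syntaxhighlight lang=bash>
# Dry-run the posix-awk regex from the logrotate script on dummy file names.
dir=$(mktemp -d)
touch "$dir"/mainlog-20240101 "$dir"/rejectlog-20240101 "$dir"/paniclog "$dir"/otherfile
# Only mainlog-20240101, paniclog and rejectlog-20240101 should match:
find "$dir" -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' -printf '%f\n' | sort
rm -rf "$dir"
</syntaxhighlight>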
fff9666e1a378ff12ee67eb26c333553c4f29166
Solaris Loadgenerator
0
216
2222
756
2021-11-25T14:26:08Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|Loadgenerator]]
This is a little script to generate load. It uses gzip and bzip2 to generate load, fetched from the void (/dev/urandom) and compressed into the void again :-).
Call it with <scriptname> <number> to generate a load of <number>.
<syntaxhighlight lang=bash>
#!/usr/bin/bash
count=$1
for((i=1;i<=${count};i++))
do
cat /dev/urandom | bzip2 | gzip -9 >/dev/null &
done
</syntaxhighlight>
79df56a5ccaffcf7eaf7d5f3fcf76f6cb1051a2c
Autofs
0
256
2223
2132
2021-11-25T14:27:30Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Linux|autofs]]
[[Kategorie:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<syntaxhighlight lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</syntaxhighlight>
===/etc/auto.master.d/home.autofs===
<syntaxhighlight lang=bash>
/home /etc/auto.master.d/home.map
</syntaxhighlight>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<syntaxhighlight lang=bash>
* :/data/home/& nfs.server.de:/home/&
</syntaxhighlight>
or from a server that supports NFSv4.1:
<syntaxhighlight lang=bash>
* -proto=tcp,vers=4.1 nfs.server.de:/home/&
</syntaxhighlight>
The asterisk means that any directory under /home/ is matched by this rule.
The ampersand is replaced by the part that the asterisk matched.
So if you enter /home/a, the automounter first looks locally for /data/home/a, which is mounted if it is found.
<syntaxhighlight lang=bash>
# cd /home/a
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</syntaxhighlight>
For another /home/b which is on the nfs server it looks like this:
<syntaxhighlight lang=bash>
# cd /home/b
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</syntaxhighlight>
===cifs===
<i>/etc/auto.master.d/mycifsshare.autofs</i>:
<syntaxhighlight lang=bash>
/data/cifs /etc/auto.master.d/mycifsshare.map
</syntaxhighlight>
<i>/etc/auto.master.d/mycifsshare.map</i>:
<syntaxhighlight lang=bash>
mycifsshare -fstype=cifs,rw,credentials=/etc/samba/mycifsshare_credentials,uid=<myuser>,forceuid ://192.168.1.2/mycifsshare
</syntaxhighlight>
5a615ed47147a36ab1b8d0acf524ccf53db33de5
Qemu
0
281
2226
1285
2021-11-25T14:27:52Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Qemu]]
=virsh - management user interface=
==Display running domains==
<syntaxhighlight lang=bash>
# virsh list
Id Name State
----------------------------------------------------
1 domain_v1 running
</syntaxhighlight>
==Display domain information==
<syntaxhighlight lang=bash>
# virsh dominfo domain_v1
Id: 1
Name: domain_v1
UUID: b80fe77e-5bdd-29a9-d4c4-84482ace50ff
OS Type: hvm
State: running
CPU(s): 4
CPU time: 674481.3s
Max memory: 15605760 KiB
Used memory: 15605760 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
</syntaxhighlight>
67c50410aff249c1617e435e080cf37c444837b8
ISCSI Initiator with Linux
0
387
2228
2167
2021-11-25T14:27:54Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<syntaxhighlight lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: false
dhcp6: false
optional: true
eno2:
dhcp4: false
dhcp6: false
optional: true
bonds:
bond0:
interfaces:
- eno1
- eno2
parameters:
lacp-rate: slow
mode: 802.3ad
transmit-hash-policy: layer2
addresses:
- 10.71.112.135/16
gateway4: 10.71.101.1
nameservers:
addresses:
- 10.71.111.11
- 10.71.111.12
search:
- domain.de
</syntaxhighlight>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<syntaxhighlight lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
enp132s0f0:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.250.71.32/24
set-name: iscsi0
match:
macaddress: a0:36:9f:d4:cd:1a
enp132s0f1:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.251.71.32/24
set-name: iscsi1
match:
macaddress: a0:36:9f:d4:cd:18
</syntaxhighlight>
=== Apply the parameters and check settings ===
<syntaxhighlight lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</syntaxhighlight>
=== Check if all components are configured right for jumbo-frames ===
<syntaxhighlight lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</syntaxhighlight>
If the ping does not succeed, one of the switches along the path or the iSCSI storage is missing jumbo-frame settings.
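The -s 8972 used above is not arbitrary: it is the MTU minus the IPv4 and ICMP headers. A quick check:
<syntaxhighlight lang=bash>
# Largest ICMP payload that fits a 9000-byte MTU without fragmentation:
# subtract 20 bytes IPv4 header and 8 bytes ICMP header.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"    # 8972, the -s value used above
</syntaxhighlight>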
== Configure iSCSI ==
=== Setup initiator iqn ===
Set up a new iqn:
<syntaxhighlight>
# /sbin/iscsi-iname
</syntaxhighlight>
The resulting iqn goes into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<syntaxhighlight>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</syntaxhighlight>
=== Setup iSCSI-Interfaces ===
<syntaxhighlight lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</syntaxhighlight>
<syntaxhighlight lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</syntaxhighlight>
=== Discover LUNs that are offered by the storage ===
<syntaxhighlight lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</syntaxhighlight>
<syntaxhighlight lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</syntaxhighlight>
=== Login to discovered LUNs ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</syntaxhighlight>
=== Take a look at the running session ===
<syntaxhighlight lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</syntaxhighlight>
=== Check the session is still ok after a restart of iscsid.service ===
<syntaxhighlight lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</syntaxhighlight>
=== Enable automatic startup of connection ===
<syntaxhighlight lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</syntaxhighlight>
=== Check timeout parameter ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</syntaxhighlight>
=== Adjust timeout values to your needs ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</syntaxhighlight>
== Configure multipathing ==
=== List SCSI devices ===
<syntaxhighlight lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</syntaxhighlight>
=== Get wwids for devices ===
<syntaxhighlight lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</syntaxhighlight>
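That sdb and sdc report the identical wwid is the point: both devices are paths to the same LUN, so multipathd can group them into one map. A trivial equality check with the two values from the transcript above:
<syntaxhighlight lang=bash>
# The wwids reported for /dev/sdb and /dev/sdc above are identical,
# so both devices are paths to one LUN.
wwid_sdb=3628dee5100f846b5243be07d00000004
wwid_sdc=3628dee5100f846b5243be07d00000004
if [ "$wwid_sdb" = "$wwid_sdc" ]
then
    echo "one LUN, two paths"
fi
</syntaxhighlight>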
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<syntaxhighlight>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy multibus
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda by wwid
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</syntaxhighlight>
=== Startup multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<syntaxhighlight lang=bash>
# multipath -r
</syntaxhighlight>
<syntaxhighlight lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</syntaxhighlight>
=== Create a systemd unit to mount it at the right time during boot ===
<syntaxhighlight lang=bash>
# systemctl edit --force --full data.mount
</syntaxhighlight>
==== /etc/systemd/system/data.mount ====
<syntaxhighlight lang=ini>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</syntaxhighlight>
=== Enable your unit on next reboot and start it for now ===
<syntaxhighlight lang=bash>
# systemctl enable data.mount
# systemctl start data.mount
</syntaxhighlight>
=== Check for success ===
<syntaxhighlight lang=bash>
# df -h /dev/mapper/data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/data 10T 72G 10T 1% /data
</syntaxhighlight>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
b7b642c92cbd7294edcd6f583b8d121741c047e8
ZFS nice commands
0
362
2229
1922
2021-11-25T14:28:04Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:ZFS]]
=Some ZFS commands I use often (on Linux)=
==zpool==
===Get zpool status===
<syntaxhighlight lang=bash>
# zpool status -P
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/disk/by-id/ata-SanDisk_SDSSDHII960G_151740411091-part4 ONLINE 0 0 0
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
<syntaxhighlight lang=bash>
# zpool status -PL
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
errors: No known data errors
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
* -L : Display real paths for vdevs resolving all symbolic links.
===Get zpool size===
<syntaxhighlight lang=bash>
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 788G 609G 179G - 53% 77% 1.00x ONLINE -
</syntaxhighlight>
Ooooh... bad fragmentation! So what? It's an SSD!
===Get the ashift value===
<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 9
</syntaxhighlight>
which means 2^9=512 := 512-byte blocks in the backend... that is uncool for SSDs.
<syntaxhighlight lang=bash>
# echo $[ 2 ** 12 ]
4096
# zpool set ashift=12 rpool
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 12
</syntaxhighlight>
which means 2^12=4096 := 4k blocks in the backend. Perfect! Note that an ashift set this way only applies to vdevs added from now on; existing vdevs keep the ashift they were created with.
==zfs==
==zdb==
===Traverse all blocks===
<syntaxhighlight lang=bash>
# zdb -b rpool
Traversing all blocks to verify nothing leaked ...
loading space map for vdev 0 of 1, metaslab 196 of 197 ...
609G completed (4928MB/s) estimated time remaining: 0hr 00min 00sec
No leaks (block sum matches space maps exactly)
bp count: 32920989
ganged count: 0
bp logical: 760060348928 avg: 23087
bp physical: 650570102784 avg: 19761 compression: 1.17
bp allocated: 654308115456 avg: 19875 compression: 1.16
bp deduped: 0 ref>1: 0 deduplication: 1.00
SPA allocated: 654308115456 used: 77.33%
additional, non-pointer bps of type 0: 237576
Dittoed blocks on same vdev: 1230844
</syntaxhighlight>
53962f3de8120073aa0263b4db9ae40cb3147580
SSL and TLS
0
229
2230
1813
2021-11-25T14:28:07Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie: Security]]
=Web=
==HTTPS==
===TLSA - Record ===
<syntaxhighlight lang=bash>
$ openssl s_client -connect lars.timmann.de:443 </dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER | openssl sha256
(stdin)= e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</syntaxhighlight>
This could be used for a TLSA record like this (usage 3, selector 1 because the digest is over the public key, matching type 1 for SHA-256):
<pre>
_443._tcp.lars.timmann.de. 60 IN TLSA 3 1 1 e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</pre>
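Building the record line from the digest is plain string formatting; a sketch using the hash computed above (selector 1, since the digest is taken over the public key):
<syntaxhighlight lang=bash>
# Assemble the TLSA record line: usage 3 (DANE-EE),
# selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256).
host=lars.timmann.de
hash=e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
printf '_443._tcp.%s. 60 IN TLSA 3 1 1 %s\n' "$host" "$hash"
</syntaxhighlight>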
===HSTS - HTTP Strict Transport Security===
<syntaxhighlight lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</syntaxhighlight>
You need to enable the headers module in Apache.
On Ubuntu just do:
<syntaxhighlight lang=bash>
# sudo a2enmod headers
</syntaxhighlight>
The max-age is entered in seconds:
<syntaxhighlight lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</syntaxhighlight>
So this value is one year in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on this page to https, even if the link is an http link. If the secure connection cannot be established because of certificate errors, the browser will refuse to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<syntaxhighlight lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</syntaxhighlight>
At the end you get one header line for Strict-Transport-Security and one for Public-Key-Pins, both in Apache configuration format.
<syntaxhighlight lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</syntaxhighlight>
You need to enable the headers module in Apache.
On Ubuntu just do:
<syntaxhighlight lang=bash>
$ sudo a2enmod headers
</syntaxhighlight>
=Mail=
==STARTTLS==
with OpenSSL:
<syntaxhighlight lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</syntaxhighlight>
with GNUTLS:
<syntaxhighlight lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</syntaxhighlight>
You can specify the security priority for the handshake like this:
<syntaxhighlight lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</syntaxhighlight>
Or use sslscan to check the available ciphers:
<syntaxhighlight lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</syntaxhighlight>
==SMTPS==
with OpenSSL:
<syntaxhighlight lang=bash>
$ openssl s_client -connect <mailserver>:465
</syntaxhighlight>
with GNUTLS:
<syntaxhighlight lang=bash>
$ gnutls-cli --port 465 <mailserver>
</syntaxhighlight>
ec8c10ea96acae262b5b89079db6c38f296e1bb2
OpenSSL
0
347
2232
2169
2021-11-25T14:28:26Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout
</syntaxhighlight>
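Besides verifying the chain, it can be useful to check the validity period of the certificate itself (same file as above):
<syntaxhighlight lang=bash>
# openssl x509 -noout -subject -dates -in /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>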
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
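To check only the requested subjectAltName entries, you can filter the text output (using the variables from above):
<syntaxhighlight lang=bash>
$ openssl req -noout -text -in ${hosts[0]}-csr.pem | grep -A1 'Subject Alternative Name'
</syntaxhighlight>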
85a056e1082de3850ae4282d64264c03349f15c4
LUKS - Linux Unified Key Setup
0
255
2234
979
2021-11-25T14:29:10Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Linux]]
[[Kategorie:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<syntaxhighlight lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</syntaxhighlight>
<syntaxhighlight lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</syntaxhighlight>
===Create and get the UUID===
'''The following mkswap command will erase all data on the chosen volume!!!'''
So be sure you pick the right one!
<syntaxhighlight lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</syntaxhighlight>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<syntaxhighlight lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</syntaxhighlight>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Skips the region where your UUID is written, so it is preserved on disk.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<syntaxhighlight lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</syntaxhighlight>
====Check the status====
<syntaxhighlight lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</syntaxhighlight>
====Make the swapFS====
<syntaxhighlight lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</syntaxhighlight>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<syntaxhighlight lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</syntaxhighlight>
Reboot to test your settings.
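After the reboot, a quick way to confirm that the encrypted swap is active (device name as configured above):
<syntaxhighlight lang=bash>
# cat /proc/swaps
# cryptsetup status cryptswap1
</syntaxhighlight>
The mapped device /dev/mapper/cryptswap1 should show up as active swap space.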
da8f05cc7cf8922a344b101e37a13deb8e399fe0
Admin hints
0
360
2235
2185
2021-11-25T14:29:17Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:KnowHow]]
==Cheat sheets==
* [https://cheat.sh Curl usable general cheat sheet]
==DNS==
===Get your IP address===
<syntaxhighlight lang=shell-session>
$ dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com
</syntaxhighlight>
e11420b10c77a587a82c812e8fbdcb83e98e1360
2265
2235
2021-11-25T15:44:56Z
Lollypop
2
/* DNS */
wikitext
text/x-wiki
[[category:KnowHow]]
==Cheat sheets==
* [https://cheat.sh Curl usable general cheat sheet]
==DNS==
===Get your IP address===
<syntaxhighlight lang=bash>
$ dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com
</syntaxhighlight>
b00324230efbd7ccc679800ecb7b29204bcd8931
Ufw
0
224
2236
888
2021-11-25T14:29:20Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<syntaxhighlight lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=no
</syntaxhighlight>
/etc/ufw/sysctl.conf
<syntaxhighlight lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</syntaxhighlight>
==Setup Rules==
===Adding a rule===
<syntaxhighlight lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</syntaxhighlight>
===Inserting before===
<syntaxhighlight lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</syntaxhighlight>
==Own applications==
===nrpe===
/etc/ufw/applications.d/nrpe
<syntaxhighlight lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</syntaxhighlight>
===MySQL===
/etc/ufw/applications.d/mysql
<syntaxhighlight lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</syntaxhighlight>
===Exim===
/etc/ufw/applications.d/exim
<syntaxhighlight lang=bash>
[Exim SMTP]
title=Mail Server (Exim, SMTP)
description=Small, but very powerful and efficient mail server
ports=25/tcp
[Exim SMTP Virusscanned]
title=Mail Server (Exim, SMTP Virusscanned)
description=Small, but very powerful and efficient mail server
ports=26/tcp
[Exim SMTPS]
title=Mail Server (Exim, SMTPS)
description=Small, but very powerful and efficient mail server
ports=465/tcp
[Exim SMTP Message Submission]
title=Mail Server (Exim, Message Submission)
description=Small, but very powerful and efficient mail server
ports=587/tcp
</syntaxhighlight>
Get a list of rules to set from Exim's configuration:
<syntaxhighlight lang=awk>
# exim -bP local_interfaces | awk '
BEGIN{
ports[25]="Exim SMTP";
ports[26]="Exim SMTP Virusscanned"
ports[465]="Exim SMTPS";
ports[587]="Exim SMTP Message Submission";
from="any"; # <----- Look if it fits what you want
}
{
gsub(/^.*= /,"");
split($0,services,/ : /);
for(service in services){
split(services[service],part,/\./);
ip=part[1]"."part[2]"."part[3]"."part[4];
port=part[5];
printf "ufw allow log from %s to %s app \"%s\"\n",from,ip,ports[port];
}
}'
ufw allow log from any to 192.168.5.103 app "Exim SMTP"
ufw allow log from any to 192.168.5.103 app "Exim SMTP Virusscanned"
ufw allow log from any to 192.168.5.103 app "Exim SMTPS"
</syntaxhighlight>
==Inspect your application profile==
<syntaxhighlight lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</syntaxhighlight>
223ff8166dcdf131b082c172570ecb76675aa037
Ubuntu zsys
0
377
2237
2124
2021-11-25T14:29:45Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:Ubuntu]]
==Configure garbage collection==
<syntaxhighlight lang=yaml>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entry per unit of time if enough of them are present
# The order condition the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC start after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 7
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsysd.service
zsysctl service gc
update-grub
</syntaxhighlight>
2c2d9de9294a4beec0e99264b9fc83aa867473a2
EasyRSA
0
275
2238
1240
2021-11-25T14:30:05Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=create CA user=
<syntaxhighlight lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</syntaxhighlight>
=Do everything CA specific as CA user!=
<syntaxhighlight lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</syntaxhighlight>
=Setup EasyRSA=
==Ubuntu packages==
<syntaxhighlight lang=bash>
# aptitude install openvpn easy-rsa
</syntaxhighlight>
==Create your CA==
<syntaxhighlight lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</syntaxhighlight>
==Edit the defaults==
Setup proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Really just do this before you start with your CA. It will delete everything: keys and certificates!!!'''
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
==Generate DH parameter==
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
or
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
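Either way, you can let OpenSSL validate the generated parameters afterwards (a quick sanity check, not required):
<syntaxhighlight lang=bash>
$ openssl dhparam -in dh4096.pem -check -noout
</syntaxhighlight>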
==Generate TLS-auth parameter==
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and want to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<syntaxhighlight lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</syntaxhighlight>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this for example:
<syntaxhighlight lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</syntaxhighlight>
==Create your CA certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
Check it with
$ openssl x509 -noout -text -in keys/ca.crt
==Create the server certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server openvpn-server
For example server keys with 5 years validity:
$ KEY_EXPIRE=1825 ./build-key-server openvpn-server
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free but on your own risk!!!
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${TYPE}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
;;
*)
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
export SERVER
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
if(i==NF) gsub(/\//,", ", $i)
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# Ca Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</syntaxhighlight>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: client.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
==OpenVPN Server ==
===OpenVPN Server Template===
# I am using the mysql-auth-plugin from [https://github.com/chantra/openvpn-mysql-auth https://github.com/chantra/openvpn-mysql-auth]
# On the OpenVPN-Server the user openvpn has uid 1195 and gid 1195 and I have a TMP-dir for this user in the /etc/fstab like this:
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</syntaxhighlight>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for server===
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</syntaxhighlight>
8096706668f9d3f4aa517c75b86a57e8bbeefba8
Fail2ban
0
276
2239
1247
2021-11-25T14:30:55Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Security]]
[[Kategorie:Linux]]
==Installation==
===Debian / Ubuntu===
<syntaxhighlight lang=bash>
# apt-get install fail2ban
</syntaxhighlight>
==Configuration==
To keep your settings safe across updates, put them in the <i>*.local</i> files. These are not overwritten by package update procedures.
===paths-overrides.local===
My logfile names contain date parts, so fail2ban's default paths would fail to find the logs.
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
# doveadm log find
Looking for log files from /var/log
Debug: /var/log/dovecot/dovecot.debug-20160309
Info: /var/log/dovecot/dovecot.debug-20160309
Warning: /var/log/dovecot/dovecot.log-20160309
Error: /var/log/dovecot/dovecot.log-20160309
Fatal: /var/log/dovecot/dovecot.log-20160309
</syntaxhighlight>
<syntaxhighlight lang=ini>
[DEFAULT]
dovecot_log = /var/log/dovecot/dovecot.log-*
exim_main_log = /var/log/exim/mainlog-*
</syntaxhighlight>
===jail.local===
<syntaxhighlight lang=ini>
[DEFAULT]
bantime = 3600
[sshd]
enabled = true
[exim-spam]
enabled = true
[exim]
enabled = true
[sshd-ddos]
enabled = true
[dovecot]
enabled = true
[sieve]
enabled = true
</syntaxhighlight>
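After restarting fail2ban, it is worth checking that the jails enabled above actually came up, for example:
<syntaxhighlight lang=bash>
# fail2ban-client status
# fail2ban-client status sshd
</syntaxhighlight>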
b7966c5322fdcf1e97a1fa4c10d31d0aa825b7c0
OwnCloud Config
0
195
2240
2231
2021-11-25T14:34:02Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:OwnCloud]]
==Separate self-installed apps from bundled apps==
In your config add:
<syntaxhighlight lang=php>
'apps_paths' => array (
0 => array (
'path' => OC::$SERVERROOT.'/apps',
'url' => '/apps',
'writable' => false,
),
1 => array (
'path' => OC::$SERVERROOT.'/other_apps',
'url' => '/other_apps',
'writable' => true,
),
),
</syntaxhighlight>
5b5fb6e240e198d38887a8a8295195957e44a8d6
Windows
0
356
2241
1858
2021-11-25T14:34:19Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
==Manage Stored User Names Passwords==
<syntaxhighlight lang=winbatch>
%windir%\System32\rundll32.exe keymgr.dll,KRShowKeyMgr
</syntaxhighlight>
cc592537e24fbaa8ab95ed740efbc0b7aa4fc289
SSL and TLS
0
229
2242
2230
2021-11-25T14:34:26Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: Security]]
=Web=
==HTTPS==
===TLSA - Record ===
<syntaxhighlight lang=bash>
$ openssl s_client -connect lars.timmann.de:443 </dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER | openssl sha256
(stdin)= e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</syntaxhighlight>
This can be used in a TLSA record like this:
_443._tcp.lars.timmann.de. 60 IN TLSA 3 0 1 e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
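To verify the published record, query it and compare the hash with the one computed above (assuming the record is already in the zone):
<syntaxhighlight lang=bash>
$ dig +short _443._tcp.lars.timmann.de TLSA
</syntaxhighlight>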
===HSTS - HTTP Strict Transport Security===
<syntaxhighlight lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</syntaxhighlight>
You need to enable the headers module in Apache.
On Ubuntu just do:
<syntaxhighlight lang=bash>
$ sudo a2enmod headers
</syntaxhighlight>
The max-age is entered in seconds:
<syntaxhighlight lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</syntaxhighlight>
So this value corresponds to one year in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on the page to HTTPS, even if it is a plain HTTP link. If the secure connection cannot be established because of certificate errors, the browser refuses to load the page. If the header contains ''includeSubDomains;'', all subdomains are treated the same way.
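A quick way to confirm the header is actually delivered (hostname is just an example):
<syntaxhighlight lang=bash>
$ curl -sI https://lars.timmann.de/ | grep -i strict-transport-security
</syntaxhighlight>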
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<syntaxhighlight lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</syntaxhighlight>
At the end you get one header line for Strict-Transport-Security and one for Public-Key-Pins, both in Apache configuration format.
<syntaxhighlight lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</syntaxhighlight>
You need to enable the headers module in Apache.
On Ubuntu just do:
<syntaxhighlight lang=bash>
$ sudo a2enmod headers
</syntaxhighlight>
=Mail=
==STARTTLS==
with OpenSSL:
<syntaxhighlight lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</syntaxhighlight>
with GNUTLS:
<syntaxhighlight lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</syntaxhighlight>
You can specify the security priority for the handshake like this:
<syntaxhighlight lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</syntaxhighlight>
Or use sslscan to check the available ciphers:
<syntaxhighlight lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</syntaxhighlight>
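The server certificate from the STARTTLS session can also be inspected directly, reusing the s_client pipe shown in the TLSA section:
<syntaxhighlight lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port> </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates
</syntaxhighlight>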
==SMTPS==
with OpenSSL:
<syntaxhighlight lang=bash>
$ openssl s_client -connect <mailserver>:465
</syntaxhighlight>
with GNUTLS:
<syntaxhighlight lang=bash>
$ gnutls-cli --port 465 <mailserver>
</syntaxhighlight>
b8e718b95350de20c7fcffec1731ce4834b98f06
PowerDNS
0
287
2243
1939
2021-11-25T14:34:33Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<source lang=apt>
APT::Default-Release "xenial";
</source>
===/etc/apt/preferences.d/pdns===
<source lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</source>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<source>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</source>
===Do the upgrade===
<source lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</source>
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<source lang=bash>
#ForwardToSyslog=yes
</source>
to
<source lang=bash>
ForwardToSyslog=yes
</source>
Then restart the journald
<source lang=bash>
# systemctl restart systemd-journald.service
</source>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<source lang=bash>
source s_src {
system();
internal();
};
</source>
to
<source lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</source>
==chroot with systemd==
<source lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</source>
<source lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</source>
or
<source lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</source>
<source lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</source>
<source lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</source>
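After creating or editing these drop-in files, systemd has to re-read its unit definitions before the overrides take effect; restarting the affected services then picks them up:
<source lang=bash>
# Re-read unit files so the drop-ins above are seen, then restart.
systemctl daemon-reload
systemctl restart pdns.service pdns-recursor.service
</source>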
d761f0072b0a0fbc3029f29997fd932559b55445
Solaris OracleClusterware
0
274
2244
1296
2021-11-25T14:35:38Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|Clusterware]]
[[Category:Oracle|Clusterware]]
==Get Solaris release information==
<syntaxhighlight lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</syntaxhighlight>
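To see what the nawk program extracts, here is the same field logic run on simulated <code>pkg info</code> output (the version numbers are made up for illustration, and plain awk stands in for nawk):
<syntaxhighlight lang=bash>
# Simulated `pkg info kernel` output (hypothetical values) piped
# through the same field logic as above.
printf '          Build Release: 5.11\n                 Branch: 0.175.2.10.0.42.2\n' | \
awk -F '.' '
/Build Release:/ { solaris=$NF }
/Branch:/        { subrel=$3; update=$4 }
END { printf "Solaris %d.%d Update %d\n", solaris, subrel, update }'
</syntaxhighlight>
For this sample input it prints <code>Solaris 11.2 Update 10</code>.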
=Needed Solaris packages=
==Install pkg dependencies==
<syntaxhighlight lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</syntaxhighlight>
==Check pkg dependencies==
<syntaxhighlight lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</syntaxhighlight>
=User / group settings=
==Groups==
<syntaxhighlight lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</syntaxhighlight>
==User==
<syntaxhighlight lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</syntaxhighlight>
===Generate ssh public keys===
<syntaxhighlight lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</syntaxhighlight>
Add the public keys of the other nodes.
After that, run this as user grid on every other node:
<syntaxhighlight lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</syntaxhighlight>
Now do a cross login from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
==Projects==
<syntaxhighlight lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(basic,1024,deny)" \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</syntaxhighlight>
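The project.max-shm-memory value is given in bytes; 274877906944 bytes is exactly the 256 GiB that later shows up in the prctl output:
<syntaxhighlight lang=bash>
# 274877906944 bytes / 1024^3 = 256 GiB
echo $(( 274877906944 / 1024 / 1024 / 1024 ))
</syntaxhighlight>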
===Check project settings===
<syntaxhighlight lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</syntaxhighlight>
=Directories=
<syntaxhighlight lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</syntaxhighlight>
=Storage tasks=
==Discover LUNs==
<syntaxhighlight lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</syntaxhighlight>
==Label Disks==
===Single Disk===
<syntaxhighlight lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</syntaxhighlight>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<syntaxhighlight lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</syntaxhighlight>
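This command file holds exactly the sequence that the single-disk printf one-liner above feeds to format, so both methods apply the same label:
<syntaxhighlight lang=bash>
# Prints the same command sequence as format_command_file.txt.
printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n'
</syntaxhighlight>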
<syntaxhighlight lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</syntaxhighlight>
==Set swap to physical RAM==
<syntaxhighlight lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</syntaxhighlight>
=Network=
==Check port ranges==
<syntaxhighlight lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</syntaxhighlight>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable once network and broadcast are excluded. This obviously limits the cluster to a maximum of six nodes.
First node:
<syntaxhighlight lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</syntaxhighlight>
Second node:
<syntaxhighlight lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</syntaxhighlight>
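The six-node limit follows directly from the /29 prefix; the arithmetic can be checked in the shell:
<syntaxhighlight lang=bash>
# Usable host addresses in a /29: 2^(32-29) minus network and broadcast.
prefix=29
echo $(( (1 << (32 - prefix)) - 2 ))
</syntaxhighlight>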
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<syntaxhighlight lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</syntaxhighlight>
=Patching=
==Upgrade OPatch==
Do as root:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</syntaxhighlight>
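The nawk expression in the snapshot lines builds a name like <code>OPatch_&lt;version&gt;</code> from the opatch output; simulated here with a made-up version string (plain awk standing in for nawk):
<syntaxhighlight lang=bash>
# Simulated `opatch version` output; prints the snapshot suffix
# that the zfs snapshot commands above embed in the snapshot name.
printf 'OPatch Version: 11.2.0.3.12\n' | \
awk '/OPatch Version:/ { print $1 "_" $NF }'
</syntaxhighlight>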
==Apply PSU==
On first node as user grid:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</syntaxhighlight>
On all nodes do as root:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</syntaxhighlight>
==Configure local listener to another port==
As grid user:
<syntaxhighlight lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</syntaxhighlight>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
===Example for chdg===
<syntaxhighlight lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</syntaxhighlight>
===Example for mkdg===
<syntaxhighlight lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</syntaxhighlight>
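Both generator scripts rewrite the slice suffix from s2 to s0 via gsub before emitting the disk string; in isolation the rewrite looks like this:
<syntaxhighlight lang=bash>
# The s2 -> s0 slice rewrite used in the chdg/mkdg generators above.
echo '/dev/rdsk/c0t60002AC000000000C903010650004002d0s2' | \
awk '{ gsub(/s2$/, "s0", $1); print $1 }'
</syntaxhighlight>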
data_config.xml:
<syntaxhighlight lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C903010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C903010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C903010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C903010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C903010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C903010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C903010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C903010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C903010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C903010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C906010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C906010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C906010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C906010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C906010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C906010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C906010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C906010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C906010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C906010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</syntaxhighlight>
asmcmd:
<syntaxhighlight lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</syntaxhighlight>
7b239d7cf3c3db664914316586b9ad12fbefd92f
2270
2244
2021-11-25T15:50:12Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris11|Clusterware]]
[[Category:Oracle|Clusterware]]
==Get Solaris release information==
<syntaxhighlight lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</syntaxhighlight>
=Needed Solaris packages=
==Install pkg dependencies==
<syntaxhighlight lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</syntaxhighlight>
==Check pkg dependencies==
<syntaxhighlight lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</syntaxhighlight>
=User / group settings=
==Groups==
<syntaxhighlight lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</syntaxhighlight>
==User==
<syntaxhighlight lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</syntaxhighlight>
===Generate ssh public keys===
<syntaxhighlight lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</syntaxhighlight>
Add the public keys of the other nodes.
After that, run this as user grid on every other node:
<syntaxhighlight lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</syntaxhighlight>
Now do a cross login from every node to every other node (including itself) so that every host ends up in known_hosts. The installer needs this.
==Projects==
<syntaxhighlight lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(basic,1024,deny)" \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</syntaxhighlight>
===Check project settings===
<syntaxhighlight lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</syntaxhighlight>
=Directories=
<syntaxhighlight lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</syntaxhighlight>
=Storage tasks=
==Discover LUNs==
<syntaxhighlight lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</syntaxhighlight>
==Label Disks==
===Single Disk===
<syntaxhighlight lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</syntaxhighlight>
===All FC disks===
On x86 you first have to run format -> fdisk -> y for every disk :-\
'''DON'T DO THE NEXT STEP UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING!'''
format_command_file.txt:
<syntaxhighlight lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</syntaxhighlight>
<syntaxhighlight lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</syntaxhighlight>
==Set swap to physical RAM==
<syntaxhighlight lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</syntaxhighlight>
=Network=
==Check port ranges==
<syntaxhighlight lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</syntaxhighlight>
==Setup private cluster interconnects==
Example with a small /29 net: eight addresses, six of them usable once network and broadcast are excluded. This obviously limits the cluster to a maximum of six nodes.
First node:
<syntaxhighlight lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</syntaxhighlight>
Second node:
<syntaxhighlight lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</syntaxhighlight>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<syntaxhighlight lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</syntaxhighlight>
=Patching=
==Upgrade OPatch==
Do as root:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</syntaxhighlight>
==Apply PSU==
On first node as user grid:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</syntaxhighlight>
On all nodes do as root:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</syntaxhighlight>
==Configure local listener to another port==
As grid user:
<syntaxhighlight lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</syntaxhighlight>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
===Example for chdg===
<syntaxhighlight lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</syntaxhighlight>
===Example for mkdg===
<syntaxhighlight lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</syntaxhighlight>
data_config.xml:
<syntaxhighlight lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C903010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C903010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C903010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C903010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C903010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C903010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C903010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C903010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C903010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C903010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C906010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C906010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C906010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C906010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C906010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C906010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C906010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C906010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C906010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C906010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</syntaxhighlight>
asmcmd:
<syntaxhighlight lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</syntaxhighlight>
a77f1062f239f1ccd4bcae1c4a7cb9ab7e3c3847
NGINX
0
363
2245
1947
2021-11-25T14:35:47Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:NGINX]]
==Add module to nginx on Ubuntu==
For example http-auth-ldap:
<syntaxhighlight lang=bash>
mkdir /opt/src
cd /opt/src
apt source nginx
cd nginx-*
export HTTPS_PROXY=<your proxy server>
git clone https://github.com/kvspb/nginx-auth-ldap.git debian/modules/http-auth-ldap
./configure \
--with-cc-opt="$(dpkg-buildflags --get CFLAGS) -fPIC $(dpkg-buildflags --get CPPFLAGS)" \
--with-ld-opt="$(dpkg-buildflags --get LDFLAGS) -fPIC" \
--prefix=/usr/share/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--with-http_v2_module \
--with-threads \
--without-http_gzip_module \
--add-dynamic-module=debian/modules/http-auth-ldap
make modules
sudo install --mode=0644 --owner=root --group=root objs/ngx_http_auth_ldap_module.so /usr/lib/nginx/modules/
</syntaxhighlight>
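The built module still has to be loaded by nginx; assuming the module path used by the install command above, a <code>load_module</code> directive in the main context of nginx.conf (before the events block) does that:
<syntaxhighlight lang=nginx>
# /etc/nginx/nginx.conf -- must appear in the main context.
load_module /usr/lib/nginx/modules/ngx_http_auth_ldap_module.so;
</syntaxhighlight>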
96f4adacc4126deda52b6271a8d1a605abda2e4e
Pass
0
367
2246
1977
2021-11-25T14:38:46Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|pass]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To feed the password to the ssh password prompt you also need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost.
====Obvious way====
<syntaxhighlight lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</syntaxhighlight>
====Cooler way====
=====Create an alias=====
<syntaxhighlight lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/myuser@sshhost) ssh myuser@sshhost'
</syntaxhighlight>
=====Use it=====
<syntaxhighlight lang=bash>
$ customerA-sshhost
myuser@sshhost:~$
</syntaxhighlight>
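The alias relies on bash process substitution: <code>&lt;(command)</code> expands to a file-descriptor path holding the command's output, so sshpass -f can read the password without it ever touching disk. A minimal, bash-specific demonstration:
<syntaxhighlight lang=bash>
# <(...) expands to something like /dev/fd/63; cat reads the
# output of printf through that path.
cat <(printf 's3cret\n')
</syntaxhighlight>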
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql.
====Obvious way====
<syntaxhighlight lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</syntaxhighlight>
====Cooler way====
=====Create an alias=====
<syntaxhighlight lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</syntaxhighlight>
Or even cooler, with a separate history and defaults file per connection:
<syntaxhighlight lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</syntaxhighlight>
=====Use it=====
<syntaxhighlight lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</syntaxhighlight>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
35a17d9ad0e164361ce02db66cd95c10a9b68205
2250
2246
2021-11-25T15:28:15Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Linux|pass]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the SSH password prompt you need another tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost entry.
====Obvious way====
<syntaxhighlight lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</syntaxhighlight>
====Cooler way====
=====Create an alias=====
<syntaxhighlight lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/myuser@sshhost) ssh myuser@sshhost'
</syntaxhighlight>
=====Use it=====
<syntaxhighlight lang=bash>
$ customerA-sshhost
myuser@sshhost:~$
</syntaxhighlight>
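If you keep many such entries, the alias lines can be generated instead of typed by hand. A minimal sketch, assuming the store follows the Customers/&lt;Customer&gt;/&lt;user&gt;@&lt;host&gt; naming shown above (the function name and layout are assumptions, not part of pass itself):

```bash
#!/usr/bin/env bash
# Sketch: turn every Customers/<Customer>/<user>@<host>.gpg entry in a
# password store into an sshpass-backed alias line. The store path and
# naming convention are assumptions matching the examples above.
gen_ssh_aliases() {
  local store=$1
  local entry rel customer login user host
  for entry in "${store}"/Customers/*/*.gpg; do
    [ -e "${entry}" ] || continue
    rel=${entry#"${store}"/}            # Customers/CustomerA/myuser@sshhost.gpg
    rel=${rel%.gpg}                     # strip the .gpg suffix
    customer=${rel#Customers/}; customer=${customer%%/*}
    login=${rel##*/}                    # myuser@sshhost
    user=${login%@*}
    host=${login#*@}
    printf "alias %s-%s='sshpass -f <(pass %s) ssh %s@%s'\n" \
      "${customer}" "${host}" "${rel}" "${user}" "${host}"
  done
}
```

Running its output through eval (or dropping it into your .bashrc) gives one alias per customer host.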
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql entry.
====Obvious way====
<syntaxhighlight lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</syntaxhighlight>
====Cooler way====
=====Create an alias=====
<syntaxhighlight lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</syntaxhighlight>
Or, even cooler, with a separate history and defaults file per connection:
<syntaxhighlight lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</syntaxhighlight>
=====Use it=====
<syntaxhighlight lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</syntaxhighlight>
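The defaults file and the matching alias line can also be produced by a small helper instead of by hand. A sketch (the function name and argument order are ours):

```bash
#!/usr/bin/env bash
# Sketch: write the per-connection [client] defaults file and print the
# matching alias line, mirroring the manual steps above.
make_mysql_profile() {
  local dir=$1 host=$2 user=$3 entry=$4
  mkdir -p "${dir}/.mysql"
  # Same [client] stanza as in the manual example
  cat > "${dir}/.mysql/.my.cnf-${host}-${user}" <<EOF
[client]
host=${host}
user=${user}
EOF
  printf "alias %s-%s='MYSQL_HISTFILE=%s/.mysql/.mysql_history_%s mysql --defaults-file=%s/.mysql/.my.cnf-%s-%s --password=\$(pass show %s)'\n" \
    "${host}" "${user}" "${dir}" "${host}" "${dir}" "${host}" "${user}" "${entry}"
}
```

Called as make_mysql_profile ~/Customers/CustomerB mysqlhost mysqluser Customers/CustomerB/mysqluser@mysqlhost:mysql it recreates the setup above.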
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
883d2ace2c55833688a9a53e7c2a1da3ec35f534
Qemu
0
281
2247
2226
2021-11-25T14:39:27Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Qemu]]
=virsh - management user interface=
==Display running domains==
<syntaxhighlight lang=bash>
# virsh list
Id Name State
----------------------------------------------------
1 domain_v1 running
</syntaxhighlight>
==Display domain information==
<syntaxhighlight lang=bash>
# virsh dominfo domain_v1
Id: 1
Name: domain_v1
UUID: b80fe77e-5bdd-29a9-d4c4-84482ace50ff
OS Type: hvm
State: running
CPU(s): 4
CPU time: 674481.3s
Max memory: 15605760 KiB
Used memory: 15605760 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
</syntaxhighlight>
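For scripting, single values can be pulled out of the dominfo output. A small sketch (the helper name is ours), assuming the "Key: value" layout shown above:

```bash
# Sketch: extract one field from `virsh dominfo` output by its label.
# Usage: virsh dominfo domain_v1 | dominfo_field "State"
dominfo_field() {
  # Split on a colon followed by optional spaces, match the label exactly
  awk -F': *' -v key="$1" '$1 == key { print $2 }'
}
```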
c937d1e95e3d46b28689decdc397f56ee0f82c9e
Oracle Discoverer
0
364
2248
1948
2021-11-25T15:25:06Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Oracle]]
== Changing the IP address ==
Just some lines from last change... sorry
<syntaxhighlight lang=bash>
vi /etc/sysconfig/network/ifcfg-eth0
vi /etc/sysconfig/network/routes
vi /etc/hosts
/etc/init.d/network restart
# Change the VLAN in vCenter
# reconnect with new IP
#
# Change the config
#
/opt/Middleware/ashome_1/chgip/scripts/chgiphost.sh -noconfig -oldhost discoverer01.srv.net.de -newhost discoverer.srv.net.de -oldip 172.16.31.29 -newip 172.16.7.4 -instanceHome /opt/Middleware/asinst_1
/etc/init.d/weblogic stop
# Adminserver, too
/opt/Middleware/wlserver_10.3/server/bin/setWLSEnv.sh
/opt/Middleware/wlserver_10.3/common/bin/wlst.sh
wls:/offline> readDomain('/opt/Middleware/user_projects/domains/ClassicDomain')
wls:/offline/ClassicDomain> cd ('/Machine/neuerhostname')
wls:/offline/ClassicDomain/Machine/neuerhostname> machine=cmo
wls:/offline/ClassicDomain/Machine/neuerhostname> cd ('/Server/AdminServer')
wls:/offline/ClassicDomain/Server/AdminServer> set('Machine', machine)
wls:/offline/ClassicDomain/Server/AdminServer> updateDomain()
wls:/offline/ClassicDomain/Server/AdminServer> exit()
# Start after making the changes
/etc/init.d/weblogic start
netstat -plant | grep 9001
tail -f /opt/Middleware/user_projects/domains/ClassicDomain/servers/WLS_DISCO/logs/WLS_DISCO.out
</syntaxhighlight>
d2d30c76592c9b3420789f3b5f569684bc44b1d4
MariaDB Tipps und Tricks
0
235
2249
913
2021-11-25T15:27:44Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:MariaDB]]
==ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded==
===Problem===
<source lang=bash>
# mysql
ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded
</source>
===Solution===
<source lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables
150918 15:41:13 mysqld_safe Logging to '/var/log/mysql/error.log'.
150918 15:41:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.0.20-MariaDB-0ubuntu0.15.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> INSERT INTO mysql.plugin (name, dl) VALUES ('unix_socket', 'auth_socket');
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> shutdown
# service mysql start
</source>
fbafab7d339054a6d1b3b8a7bd8495049a29a88d
Category:MySQL
14
198
2251
645
2021-11-25T15:29:51Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
ProblemsWithSecurity
0
241
2252
1175
2021-11-25T15:30:34Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Security]]
'''Avoiding security is not an option! But it sometimes helps if you have no chance to administrate your devices without cheating...'''
=Firefox=
Go '''not''' to URL ''about:config'' and navigate '''not''' to the section ''security.ssl3'', and '''not''' double click ''security.ssl3.dhe_aes_{128,256}_sha'' to set it to false.
[[Datei:Firefox_about-config_ssl.png]]
=Chrome=
==NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN==
When a site has changed its certificate and the max-age has not yet been reached, you can clear the cache for this site at: chrome://net-internals/#hsts
Enter the changed domain at <i>Delete domain</i>
and press Delete.
288a7f4d0d0913214c88286dcd705ac8e0e4ca71
Solaris OracleDB zone
0
188
2253
662
2021-11-25T15:31:19Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|Oracle Zone]]
=Setup Oracle Database on a Solaris zone with CPU limit=
Our setup is a 48 GB x86 server.
==Limit ZFS ARC==
Add to /etc/system:
<syntaxhighlight lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</syntaxhighlight>
To calculate your own value:
<syntaxhighlight lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</syntaxhighlight>
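The same calculation can be wrapped in a reusable function (the function name is ours):

```bash
# Sketch: print the /etc/system line limiting the ZFS ARC to a given
# number of gigabytes, as computed by hand above.
zfs_arc_limit_line() {
  local limit_gb=$1
  printf 'set zfs:zfs_arc_max = 0x%x\n' $(( limit_gb * 1024 * 1024 * 1024 ))
}
```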
==Create Zone==
Set values:
<syntaxhighlight lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</syntaxhighlight>
Create zone with
<syntaxhighlight lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</syntaxhighlight>
Enable dynamic pool service to add support for dedicated-cpus:
<syntaxhighlight lang=bash>
svcadm enable svc:/system/pools/dynamic
</syntaxhighlight>
Install and boot:
<syntaxhighlight lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</syntaxhighlight>
CPU-check:
<syntaxhighlight lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</syntaxhighlight>
==Create ZPools==
I used this paper: [http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf]
Values are for Solaris 10.
<syntaxhighlight lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATA_VDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZIL_VDEV="mirror c1t3d0 c1t4d0"
REDOPOOL_NAME=redopool
REDOPOOL_DATA_VDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZIL_VDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATA_VDEV="mirror c1t9d0 c1t10d0"
DB_BASEPATH=/database
DB_BLOCK_SIZE=8192
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATA_VDEV} log ${DATABASEPOOL_ZIL_VDEV}
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/data ${DATABASEPOOL}/data
zfs set logbias=throughput ${DATABASEPOOL}/data
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/index ${DATABASEPOOL}/index
zfs set logbias=throughput ${DATABASEPOOL}/index
zfs create -o mountpoint=${DB_BASEPATH}/temp ${DATABASEPOOL}/temp
zfs set logbias=throughput ${DATABASEPOOL}/temp
zfs create -o mountpoint=${DB_BASEPATH}/undo ${DATABASEPOOL}/undo
zfs set logbias=throughput ${DATABASEPOOL}/undo
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATA_VDEV} log ${REDOPOOL_ZIL_VDEV}
zfs create -o mountpoint=${DB_BASEPATH}/redo ${REDOPOOL}/redo
zfs set logbias=latency ${REDOPOOL}/redo
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATA_VDEV}
zfs create -o compression=on -o mountpoint=${DB_BASEPATH}/archive ${ARCHIVEPOOL}/archive
zfs set primarycache=metadata ${ARCHIVEPOOL}/archive
</syntaxhighlight>
880bc20e876f0d02a7ded34135e77f9cbffb45e7
SunCluster Delete Ressource Group
0
206
2254
1313
2021-11-25T15:32:14Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:SunCluster]]
=Completely removing a resource group=
Derivation of the data that will be used later in the one-liners.
Don't do this! Once again, I take no responsibility! Everything is wrong! Don't do it!
==Set the resource group in question==
<syntaxhighlight lang=bash>
# RG=my-rg
</syntaxhighlight>
==Show the resources==
<syntaxhighlight lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</syntaxhighlight>
==Take the resource group and its resources offline==
<syntaxhighlight lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</syntaxhighlight>
==Show the ZPools==
<syntaxhighlight lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</syntaxhighlight>
==Show only the ZPool names==
<syntaxhighlight lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</syntaxhighlight>
==Show the DID devices==
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</syntaxhighlight>
Or only the DIDs:
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</syntaxhighlight>
==Disable device monitoring==
This is important to get the devices completely out of the cluster later!
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</syntaxhighlight>
==Delete the resource group==
<syntaxhighlight lang=bash>
# RG=bla-rg
# clrs disable -g ${RG} +
# clrs delete -g ${RG} +
# clrg delete ${RG}
</syntaxhighlight>
==Now unmap the LUNs on the storage==
And delete them if necessary...
==Remove no-longer-present LUNs from Solaris==
<syntaxhighlight lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</syntaxhighlight>
==Clean up the DIDs==
<syntaxhighlight lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</syntaxhighlight>
==Clean up zone configs if needed==
<syntaxhighlight lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</syntaxhighlight>
810efe2a762e679fa163b7ecd6153109acb47730
Ubuntu apt
0
120
2255
1925
2021-11-25T15:37:30Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ubuntu|apt]]
== Get all non LTS packages ==
<source lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</source>
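To see what that awk program does, here it is as a standalone filter that can be fed canned apt-cache show output (the function name is ours; package names in the usage are invented):

```bash
# Sketch: filter apt-cache show records, printing every package whose
# Supported: field is not "5y" (i.e. not covered by LTS support).
unsupported_pkgs() {
  awk '
  BEGIN { support = "none" }
  /^Package:/,/^$/ {
    if (/^Package:/)   { pkg = $2 }
    if (/^Supported:/) { support = $2 }
    if (/^$/ && support != "5y") { printf "%s:\t%s\n", pkg, support }
  }
  /^$/ { support = "none" }'
}
```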
== Ubuntu support status ==
<source lang=bash>
$ ubuntu-support-status --show-unsupported
</source>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<source lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is prefered if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</source>
==Use this proxy config in the shell==
<source lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</source>
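The awk part of that eval can be exercised on its own against canned apt-config output (the function name and the proxy URL are invented for the example):

```bash
# Sketch: turn `apt-config dump Acquire` lines like
#   Acquire::http::Proxy "http://proxy.example:3128";
# into http_proxy=... / export http_proxy lines, as the eval above does.
proxy_exports() {
  awk -F '(::| )' '$3 ~ /Proxy/ {printf "%s_proxy=%s\nexport %s_proxy\n", $2, $4, $2}'
}
```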
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the nameservice.
=== Pin the normal release ===
<source lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</source>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Upgrade to the packages from the new release ===
<source lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</source>
=== Check with "apt-cache policy" which version is preferred now ===
<source lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</source>
=== Upgrade to the packages from the new release ===
<source lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</source>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<source lang=bash>
# apt -t zesty install libstdc++6
...
</source>
49b692f85dda70b07dffb796867e639297144fc8
ZFS fast scrub
0
141
2256
835
2021-11-25T15:37:46Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:ZFS|fast scrub]]
[[Kategorie:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get back to production state after a bloody hard unplanned downtime... and so on...
I would expect you not to do this.
But it worked for me:
<syntaxhighlight lang=bash>
# echo "zfs_scrub_delay/D" | mdb -k
zfs_scrub_delay:
zfs_scrub_delay:4
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</syntaxhighlight>
This sets the scrub delay to zero... your system will do a lot of scrubbing and not much else.
Remember to set it back to the old value later (4 in this example)!
<syntaxhighlight lang=bash>
# echo "zfs_scrub_delay/W4" | mdb -kw
zfs_scrub_delay:0x0 = 0x4
</syntaxhighlight>
But remember I told you: NEVER DO THIS!!!
825d86c9ee9bc4ab7dbbb6f0a20a28d88b28c512
SSH FingerprintLogging
0
358
2257
1886
2021-11-25T15:38:35Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:SSH|Fingerprint]]
[[Kategorie:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why logging fingerprints?==
It is just for the possibility of setting the [[Bash]] HISTFILE per logged-in user.
==The AuthorizedKeysCommand==
* /opt/sbin/fingerprintlog:
<syntaxhighlight lang=bash>
#!/bin/bash
# /opt/sbin/fingerprintlog <logfile> %u %k %t %f
# Arguments to AuthorizedKeysCommand may be provided using the following tokens, which will be expanded at runtime:
# %% is replaced by a literal '%',
# %u is replaced by the username being authenticated,
# %h is replaced by the home directory of the user being authenticated,
# %t is replaced with the key type offered for authentication,
# %f is replaced with the fingerprint of the key, and
# %k is replaced with the key being offered for authentication.
# If no arguments are specified then the username of the target user will be supplied.
[ "_${LOGNAME}_" != "_daemon_" ] && exit 1
LOGFILE=$1
USER=$2
KEY=$3
KEYTYPE=$4
FINGERPRINT=$5
printf "%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n" "$(/bin/date -Iseconds)" "${KEYTYPE}" "${USER}" "${PPID}" "${FINGERPRINT}" "${KEY}" >> ${LOGFILE}
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chmod 0750 /opt/sbin/fingerprintlog
# chown root:daemon /opt/sbin/fingerprintlog
</syntaxhighlight>
==Create the logfile==
* /var/log/fingerprint.log
<syntaxhighlight lang=bash>
# touch /var/log/fingerprint.log
# chown daemon:ssh-user /var/log/fingerprint.log
# chmod 0640 /var/log/fingerprint.log
</syntaxhighlight>
==Setup logrotation==
* /etc/logrotate.d/fingerprintlog
<syntaxhighlight lang=bash>
/var/log/fingerprint.log
{
su daemon syslog
create 0640 daemon ssh-user
rotate 8
weekly
missingok
notifempty
}
</syntaxhighlight>
==Add fingerprint logging to sshd==
* /etc/ssh/sshd_config
<syntaxhighlight lang=bash>
...
DenyUsers daemon
AuthorizedKeysCommand /opt/sbin/fingerprintlog /var/log/fingerprint.log %u %k %t %f
AuthorizedKeysCommandUser daemon
...
</syntaxhighlight>
Restart sshd
<syntaxhighlight lang=bash>
# systemctl restart ssh.service
</syntaxhighlight>
==Add magic to your .bashrc==
<syntaxhighlight lang=bash>
# apt install gawk
</syntaxhighlight>
* ~/.bashrc
<syntaxhighlight lang=bash>
...
# Match parent PID or grand parent PID against fingerprint.log
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(/usr/bin/gawk -v ppid="(${PPID}|$(awk '{print $4;}' /proc/${PPID}/stat))" -v user=${LOGNAME} '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6;exit;}' /var/log/fingerprint.log)
# Set the history file
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
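To see what the gawk one-liner in the .bashrc matches, here is the same field extraction as a standalone function, run against a sample log line (all values in the sample are invented):

```bash
# Sketch: match the 5th field (PPID=...) of a fingerprint.log line and
# print the fingerprint from the 6th field (FP=...), with slashes
# replaced by underscores so it is safe to use in a HISTFILE name.
extract_fp() {
  awk -v ppid="$1" '$5 ~ "^PPID="ppid"$" { gsub(/^FP=/, "", $6); gsub(/\//, "_", $6); print $6; exit }'
}
```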
6461ffff05d0568b21855e290d6c4504a99d08bf
Snorby
0
234
2258
911
2021-11-25T15:39:12Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
Just a scribble...
<syntaxhighlight lang=bash>
/usr/local/bin/suricata -D -c /etc/suricata/suricata.yaml -i eth1 --init-errors-fatal
barnyard2 -c /etc/suricata/barnyard2.conf -d /var/log/suricata -f unified2.alert -w /var/log/suricata/suricata.waldo -D
</syntaxhighlight>
eda55b96e198ad96539b7d0a41e306d0f8c6c54e
ZFS Networker
0
158
2259
941
2021-11-25T15:39:35Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:ZFS|Backup]]
[[Kategorie:Backup|Networker]]
[[Kategorie:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to setup a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used the bash as shell.
==Define variables used in the following command lines==
<syntaxhighlight lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</syntaxhighlight>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<syntaxhighlight lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</syntaxhighlight>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<syntaxhighlight lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For all but get_slaves redirect output to log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin port
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting during cluster monitoring, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
sleep 1
${CLRS_CMD} monitor ${RES}
fi
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting during cluster monitoring, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
echo "Usage: $0 dump_cluster <Resource_Group> <DIR>"
echo "Usage: $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong number of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong number of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong number of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
[ "_${ZONE}_" != "__" ] && ${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</syntaxhighlight>
MD5-Checksum
<syntaxhighlight lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
01be6677ddf4342b625b1aa59d805628
</syntaxhighlight>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<syntaxhighlight lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</syntaxhighlight>
===Look for a valid backup===
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</syntaxhighlight>
===Restore ZFS configuration===
<syntaxhighlight lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</syntaxhighlight>
Look at the file /tmp/ZFS_Setup.sh, which should look like this:
<syntaxhighlight lang=bash>
# Create ZPool sample_pool with size 1.02T:
# Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</syntaxhighlight>
Mount the needed ZFS filesystems.
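A minimal sketch of that step, using the dataset names from the example dump above (this only prints the commands; drop the echo to actually mount them once the pool exists):

```shell
# Dry run: print the zfs mount command for each dataset from the
# example dump above; remove "echo" to execute them for real
for fs in app cluster_config data1 data2 home log usr_local zone; do
    echo /usr/sbin/zfs mount sample_pool/${fs}
done
```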
===Restore zone configuration===
<syntaxhighlight lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</syntaxhighlight>
===Restore cluster configuration===
<syntaxhighlight lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</syntaxhighlight>
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<syntaxhighlight lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</syntaxhighlight>
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt.
2. Register the new resource type in the cluster. On one node do:
<syntaxhighlight lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</syntaxhighlight>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use a script like this:
<syntaxhighlight lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</syntaxhighlight>
This expands to:
<syntaxhighlight lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</syntaxhighlight>
Now we have a client name to which we can connect: sample-lh
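The expansion works because `basename NAME SUFFIX` strips a trailing suffix, so every resource name derives from the resource group name. A quick standalone check with the example names from above:

```shell
# basename strips the trailing -rg suffix, leaving the base name
# from which all the resource names are built
RGname=sample-rg
echo "$(basename ${RGname} -rg)-nsr-res"    # sample-nsr-res
```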
90af79ad3c2180deb5d9a6659fcab8b13f95528a
Solaris ssh from DVD
0
111
2260
665
2021-11-25T15:40:29Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|SSH]]
=Get SSH on a system booted from DVD=
==Mount DVD==
<source lang=bash>
# iostat -En
c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: AMI Product: Virtual CDROM Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 732 Predictive Failure Analysis: 0
...
# mkdir /tmp/dvd
# mount -F hsfs -oro /dev/dsk/c0t0d0s0 /tmp/dvd
</source>
==Unpacking software==
<source lang=bash>
# mkdir /tmp/pkg
# pkgtrans /tmp/dvd/Solaris_10/Product /tmp/pkg SUNWsshu SUNWcry SUNWopenssl-libraries
# mkdir /tmp/ssh
# cd /tmp/ssh
# 7z x -so /tmp/pkg/SUNWsshu/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWcry/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWopenssl-libraries/archive/none.7z | cpio -idv
</source>
==Use unpacked libraries==
<source lang=bash>
# crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
# crle
Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
Command line:
crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
</source>
==Check it==
<source lang=bash>
# ldd /tmp/ssh/usr/bin/ssh
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libz.so.1 => /usr/lib/libz.so.1
libcrypto.so.0.9.7 => /usr/sfw/lib/libcrypto.so.0.9.7
libgss.so.1 => /usr/lib/libgss.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libcmd.so.1 => /lib/libcmd.so.1
libdoor.so.1 => /lib/libdoor.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
libm.so.2 => /lib/libm.so.2
</source>
Looks good:
* libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
==Use ssh from /tmp/ssh==
<source lang=bash>
# /tmp/ssh/usr/bin/ssh <user>@<ip>
</source>
f2e307a134feadbe0c9ab20055479c4e6657be89
ZFS on Linux
0
222
2261
2001
2021-11-25T15:42:26Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
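The `exponent()` helper in the one-liner is just an integer log2 of the sector size. A standalone check, independent of lsblk and any real disks:

```shell
# log2 of the physical sector size gives the ashift:
# 512-byte sectors -> 9, 4096-byte sectors -> 12
printf '512\n4096\n' | awk '
function exponent(value) { for (i = 0; value > 1; i++) { value /= 2 }; return i }
{ print $1, exponent($1) }
'
```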
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
!!!BE CAREFUL with the name after @ !!!
The name after the @ is the name of the ZFS that will be DESTROYED and recreated!!!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase them so scrub/resilver is more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before boot. This is the filesystem which is read when your module is loaded.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
===Limit the cache without reboot non permanent===
For example, limit it to 512MB (far too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
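The `$[512*1024*1024]` used above is bash's obsolete arithmetic expansion; `$((...))` is the portable spelling and expands to the same byte count:

```shell
# 512 MB expressed in bytes; $((...)) is the POSIX form of the older $[...]
echo $((512*1024*1024))    # 536870912
```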
===Make the cache limit permanent===
For example, limit it to 512MB (far too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
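The hit-rate arithmetic in the awk script can be checked with one synthetic arcstats sample (the names and numbers below are made up for the demo; the field layout mimics arcstats: name, type, value):

```shell
# One synthetic sample: 90 hits, 10 misses -> hit rate 90.00%
printf 'hits 4 90\nmisses 4 10\n' | awk '
$1 == "hits"   { hits = $3 }
$1 == "misses" { misses = $3; printf "%.2f%%\n", (hits * 100) / (hits + misses) }
'
```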
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
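The core of print_local_options is the <code>zfs get -s local -Ho property,value</code> pipe. A minimal offline sketch, with canned output standing in for a real dataset (the property values and the dataset name tank/example are made up), shows how each locally set property turns into a <code>-o property=value</code> argument:

```shell
# Offline sketch of the print_local_options core. The heredoc stands in for
# real `zfs get -s local -Ho property,value all tank/example` output
# (tab-separated in reality; `read` splits on any whitespace).
opts=""
while read -r property value
do
    opts="$opts -o $property=$value"
done <<'EOF'
compression lz4
atime off
EOF
echo "zfs create$opts tank/example"
```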
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
ca37ae780dc9d9508dd7814026897b8c4a96b972
Solaris pkg
0
378
2262
2079
2021-11-25T15:44:01Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category: Solaris11|pkg]]
== Troubleshooting ==
=== Error: pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a Service Request about this issue including the information above and this message.===
Full output example:
<syntaxhighlight lang=bash>
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Traceback (most recent call last):
File "/usr/bin/pkg", line 5668, in handle_errors
__ret = func(*args, **kwargs)
File "/usr/bin/pkg", line 5654, in main_func
pargs=pargs, **opts)
File "/usr/bin/pkg", line 2267, in update
display_plan_cb=display_plan_cb, logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1556, in _update
logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1395, in __api_op
logger=logger, **kwargs)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1252, in __api_plan
display_plan_cb=display_plan_cb)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1224, in __api_plan
for pd in api_plan_func(**kwargs):
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1516, in __plan_op
log_op_end_all=True)
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1144, in __plan_common_exception
six.reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3.7/vendor-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1429, in __plan_op
self.__refresh_publishers()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 620, in __refresh_publishers
self.__cert_verify()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 603, in __cert_verify
self._img.check_cert_validity()
File "/usr/lib/python3.7/vendor-packages/pkg/client/image.py", line 1338, in check_cert_validity
uri=uri)
File "/usr/lib/python3.7/vendor-packages/pkg/misc.py", line 1242, in validate_ssl_cert
if cert.has_expired():
File "/usr/lib/python3.7/vendor-packages/OpenSSL/crypto.py", line 1360, in has_expired
not_after = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
File "/usr/lib/python3.7/_strptime.py", line 277, in <module>
_TimeRE_cache = TimeRE()
File "/usr/lib/python3.7/_strptime.py", line 191, in __init__
self.locale_time = LocaleTime()
File "/usr/lib/python3.7/_strptime.py", line 71, in __init__
self.__calc_month()
File "/usr/lib/python3.7/_strptime.py", line 99, in __calc_month
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/_strptime.py", line 99, in <listcomp>
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/calendar.py", line 63, in __getitem__
return funcs(self.format)
ValueError: character U+30000043 is not in range [U+0000; U+10ffff]
pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a
Service Request about this issue including the information above and this
message.
</syntaxhighlight>
Workaround:
<syntaxhighlight lang=bash>
# unset $(env | awk -F'=' '$1 ~ /^LC_/{print $1;}')
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Creating Plan (Package planning: 766/1256): \
</syntaxhighlight>
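The one-liner works by collecting every variable name starting with LC_ from the environment and handing the whole list to <code>unset</code>. A self-contained sketch (the locale values are made up):

```shell
# Reproduce the workaround: export some LC_* variables, strip them all,
# then count what is left over in the environment.
export LC_ALL=de_DE.UTF-8 LC_TIME=C
unset $(env | awk -F'=' '$1 ~ /^LC_/{print $1;}')
remaining=$(env | awk -F'=' '$1 ~ /^LC_/{print $1;}' | wc -l | tr -d ' ')
echo "remaining LC_* variables: $remaining"
```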
557f45ab372f4575a9592644e63fe45116a2aff4
Solaris LDOM
0
203
2263
672
2021-11-25T15:44:05Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:LDOM]]
[[Category:Solaris]]
==Useful scripts==
===get_pf_from_link_name.sh===
<syntaxhighlight lang=bash>
#!/bin/bash
link=$1
dev=$(dladm show-phys -L ${link} | \
nawk '
NR==2{
dev=$2; gsub(/[0-9]+$/,"",dev);
instance=$2; gsub(/^[^0-9]*/,"",instance);
while(getline < "/etc/path_to_inst"){
gsub(/"/,"",$NF);
if($NF == dev && $(NF-1) == instance){
gsub(/"/,"",$1);
gsub(/^\//,"",$1);
print $1;
}
}
}
')
ldm ls-io -l ${dev}
</syntaxhighlight>
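The nawk above cross-references the driver name and instance from dladm with /etc/path_to_inst. An offline sketch of just that lookup, using a fabricated two-line sample of the file (the real file only exists on Solaris; the device paths here are hypothetical):

```shell
# Fabricated /etc/path_to_inst sample: "physical path" instance "driver"
sample='"/pci@0,0/pci8086,340e@7/network@0" 0 "ixgbe"
"/pci@0,0/pci8086,340e@7/network@0,1" 1 "ixgbe"'
path=$(printf '%s\n' "$sample" | awk -v dev=ixgbe -v inst=1 '
    {
        drv=$NF; gsub(/"/,"",drv);
        if (drv == dev && $(NF-1) == inst) {
            gsub(/"/,"",$1); sub(/^\//,"",$1);
            print $1;
        }
    }')
echo "$path"
```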
a5788631f5d66686b1bccb05d78c04b16e3c457f
Perl Tipps und Tricks
0
178
2264
1823
2021-11-25T15:44:38Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Perl|Tipps und Tricks]]
==Negative match in RegEx (?:(?!PATTERN).)*==
Using perl as a special grep :-):
<syntaxhighlight lang=bash>
perl -ne 'if (/(<a href=[^>]+action=login[^>]+>(?:(?!<\/a>).)*<\/a>)/){ print $1."\n"; }' index.html
</syntaxhighlight>
This one matches a complete <pre><a href=...action=login...>(not </a>)</a></pre>.
Or more complex:
<syntaxhighlight lang=bash>
perl -ne 'if (/(<a href=[^h]*(http[s]{0,1}:\/\/([^\/"]+)[^> "]+)[^> ]*>(?:(?!<\/a>).)*<\/a>)/){ print $3."|".$2."|".$1."\n"; }' index.html
</syntaxhighlight>
Prints out:
<pre>
<server>|<url>|<complete href>
</pre>
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<syntaxhighlight lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
# ... do something with $_ ...
push(@linestack, $whatever); # unread
}
</syntaxhighlight>
== Config ==
Override compile time flags on the commandline like this:
<syntaxhighlight lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</syntaxhighlight>
I used it to run sa-compile on Solaris:
<syntaxhighlight lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</syntaxhighlight>
4296a75590872d689cfcea67296a7f9be892f98a
TShark
0
238
2266
2073
2021-11-25T15:44:56Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal based wireshark.]
The ultimate tool to sniff network traffic when you have no X. It analyzes the traffic as wireshark does. Great tool!
==MySQL traffic==
To look for MySQL traffic on an application server you can use this line:
<syntaxhighlight lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# IFACE=ens192 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -Y "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.auth_plugin -e mysql.client_auth_plugin -e mysql.error_code -e mysql.error.message -e mysql.message -e mysql.user -e mysql.passwd -e mysql.command 'port 3306'
</syntaxhighlight>
The little awk magic selects only packets to or from our own Ethernet address on interface ''IFACE''.
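What the embedded awk actually extracts, demonstrated on a canned <code>ip link show</code> sample (the interface details and MAC address are made up):

```shell
# The awk picks the second field of the "link/ether" line, i.e. the MAC
# address that the tshark "eth.addr eq ..." filter is built from.
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state UP
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff'
mac=$(printf '%s\n' "$sample" | awk '$1 ~ /link\/ether/{print $2}')
echo "$mac"
```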
==Radius traffic==
Find the client with MAC address fc-18-3c-4a-c1-fa:
<syntaxhighlight lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2, see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</syntaxhighlight>
With older tshark versions try:
<syntaxhighlight lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</syntaxhighlight>
==Duplicate ACKs==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</syntaxhighlight>
==Finding TCP problems==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</syntaxhighlight>
==Decode SSL Connections==
For example, show connections using TLS versions lower than 1.2.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
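The handshake version field is major.minor in two bytes (0x03,0x01 is TLS 1.0 up to 0x03,0x04 for TLS 1.3), which is why a filter like <code>ssl.handshake.version < 0x00000303</code> selects everything older than TLS 1.2. A small decoder sketch:

```shell
# Turn a tshark ssl/tls.handshake.version value into a protocol name.
tls_name() {
    v=$(( $1 ))                   # shell arithmetic accepts 0x literals
    major=$(( (v >> 8) & 0xff ))
    minor=$((  v       & 0xff ))
    if [ "$major" -eq 3 ] && [ "$minor" -eq 0 ]; then
        echo "SSLv3"
    elif [ "$major" -eq 3 ]; then
        echo "TLS 1.$(( minor - 1 ))"
    else
        echo "unknown"
    fi
}
tls_name 0x00000303    # -> TLS 1.2
```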
<syntaxhighlight lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</syntaxhighlight>
or for https:
<syntaxhighlight lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</syntaxhighlight>
fb6e48a2140a36738cd6d874026a5cb837a67c4d
VMWare Hints
0
343
2267
1924
2021-11-25T15:49:47Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:VMWare]]
== VCenter ==
===HTML5 User Interface===
* https://<vcenter>/ui
===Appliance Management===
* https://<vcenter>:5480/
== CLI ==
=== I/O ===
* [https://voiceforvirtual.com/2010/12/06/vscsistats/ vscsiStats] : /usr/lib/vmware/bin/vscsiStats
* [https://labs.vmware.com/flings/i-o-analyzer VMware I/O Analyzer]
* [https://labs.vmware.com/flings/ioinsight VMware IOInsight]
* [https://kb.vmware.com/kb/1008205 Using esxtop to identify storage performance issues for ESX / ESXi]
* [https://support.netapp.com support.netapp.com] -> Downloads -> Software -> NetApp NFS Plug-in for VMware
=== ESXTOP ===
* [http://www.running-system.com/vsphere-6-esxtop-quick-overview-for-troubleshooting/ vSphere 6 ESXTOP quick overview for Troubleshooting]
* [https://communities.vmware.com/docs/DOC-9279 Interpreting esxtop Statistics]
* [http://www.vmworld.net/wp-content/uploads/2012/05/Esxtop_Troubleshooting_ger.pdf PDF : vSphere 5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2012/08/esxtop_english_v11.pdf PDF : vSphere 5.5 ESXTOP quick Overview for Troubleshooting]
* [http://www.running-system.com/wp-content/uploads/2015/04/ESXTOP_vSphere6.pdf PDF : vSphere 6 ESXTOP quick Overview for Troubleshooting]
== Links ==
* [https://labs.vmware.com/flings VMWare Flings]
* [https://blogs.vmware.com/vsphere/2012/09/vmware-posters.html VMWare Posters] or here [https://blogs.vmware.com/vsphere/posters VMWare Posters]
* [https://www.vmware.com/support/developer/vima/ vSphere Management Assistant Documentation]
* [https://www.vmware.com/support/developer/vcli/ vSphere Command-Line Interface Documentation (vCLI)]
* [http://labs.hol.vmware.com VMWare Hands On Labs]
* [http://docs.ansible.com/ansible/list_of_cloud_modules.html#vmware Ansible modules for VMWare]
* [http://www.robware.net/rvtools/ RVTools]
* [http://www.running-system.com VMWare related BLOG]
* [https://kb.vmware.com/s/article/2106283 Required ports for vCenter Server 6.x (2106283)]
google: vsphere <version> configuration maximums:
* [https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf V6.5]
d04ebc3bcbadecde7ad94458826594ab0f8e0a51
NicTool
0
252
2268
967
2021-11-25T15:49:50Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
root@nictool:/var/www/nictool# wget https://github.com/msimerson/NicTool/releases/download/2.30/NicTool.tar.gz
root@nictool:/var/www/nictool# tar -xzf NicTool.tar.gz
root@nictool:/var/www/nictool# tar -xzf server/NicToolServer-2.??.tar.gz
root@nictool:/var/www/nictool# tar -xzf client/NicToolClient-2.??.tar.gz
root@nictool:/var/www/nictool# mv server foo; mv NicToolServer-2.?? server
root@nictool:/var/www/nictool# mv client bar; mv NicToolClient-2.?? client
root@nictool:/var/www/nictool# rm -rf foo bar
root@nictool:/var/www/nictool# cd client; perl Makefile.PL; make; sudo make install clean
root@nictool:/var/www/nictool# cd ../server; perl Makefile.PL; make; sudo make install clean
root@nictool:/var/www/nictool# cp server/lib/nictoolserver.conf{.dist,}
root@nictool:/var/www/nictool# cp client/lib/nictoolclient.conf{.dist,}
</syntaxhighlight>
7099d3688d07c2e70bb4c81c329378eca8274592
Wireshark
0
245
2269
955
2021-11-25T15:49:54Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:Security]]
==Add MySQL decoding==
[[File:Wireshark Column Preferences.jpg|800px|left|Select "Column Preferences..."]]
[[File:Wireshark Column Add.jpg|800px|left|Add a column]]
[[File:Wireshark Column Add Field Name.jpg|800px|left|Field type: "Custom", Field name: "mysql.query"]]
[[File:Wireshark Column Name.jpg|800px|left|Click on "New Column" and customize the name]]
[[File:Wireshark Column MySQL Query.jpg|800px|left|Et voila!]]
2a0e6ed0f2284e554e79cea6aac267baa6f160a7
Linux Software RAID
0
286
2271
2208
2021-11-25T15:50:15Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Linux]]
=mdadm=
==Force rebuild of a failed RAID==
Example for /dev/md10
===The problem: Two failed disks in a RAID5===
Looks ugly, but maybe we are lucky and the disks are just marked as bad.
==== cat /proc/mdstat ====
<syntaxhighlight lang=bash>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
...
md10 : inactive sdap1[11] sdao1[5] sdah1[15](S) sdag1[4] sdy1[3] sdz1[14] sdr1[8] sdb1[13] sdq1[16](S) sdi1[1] sda1[12]
5236577280 blocks super 1.2
...
</syntaxhighlight>
The state is <i>inactive</i>; this is not what we want... look at the details in the next step
==== mdadm --detail ====
<syntaxhighlight lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Wed Jun 15 17:46:57 2016
State : active, FAILED, Not Started
Active Devices : 9
Working Devices : 11
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17071
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
2 0 0 2 removed
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 0 0 7 removed
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
16 65 1 - spare /dev/sdq1
</syntaxhighlight>
===Force the rescan and reassemble the RAID===
For a SCSI-rescan you can try this:
[[Linux_Tipps_und_Tricks#Scan_all_SCSI_buses_for_new_devices|Scan all SCSI buses for new devices]]
And you have to do this:
<syntaxhighlight lang=bash>
# mdadm --scan /dev/md10
# mdadm --assemble --force --scan
# mdadm --run /dev/md10
</syntaxhighlight>
===Check the status===
<syntaxhighlight lang=bash>
# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 6 13:44:52 2013
Raid Level : raid5
Array Size : 4760522880 (4539.99 GiB 4874.78 GB)
Used Dev Size : 476052288 (454.00 GiB 487.48 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jun 16 10:59:16 2016
State : clean, degraded, recovering
Active Devices : 10
Working Devices : 12
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
Name : md10
UUID : 82f2b88d:276a1fd3:55a4928e:b2228edf
Events : 17074
Number Major Minor RaidDevice State
11 66 145 0 active sync /dev/sdap1
1 8 129 1 active sync /dev/sdi1
16 65 1 2 spare rebuilding /dev/sdq1
3 65 129 3 active sync /dev/sdy1
4 66 1 4 active sync /dev/sdag1
5 66 129 5 active sync /dev/sdao1
12 8 1 6 active sync /dev/sda1
7 8 145 7 active sync /dev/sdj1
8 65 17 8 active sync /dev/sdr1
13 8 17 9 active sync /dev/sdb1
14 65 145 10 active sync /dev/sdz1
15 66 17 - spare /dev/sdah1
</syntaxhighlight>
This is good:
State : clean, degraded, recovering
Better wait for the rebuild to complete before the next reboot:
Rebuild Status : 5% complete
It should continue rebuilding after a reboot, but... you know the devils...
==Replace a disk in a mirror==
Device /dev/cciss/c0d1 is a newly replaced disk in an [[HP_Smart_Array_Controller#reenable_disk_after_replacement | HP Array Controller]]
<syntaxhighlight lang=bash>
[root@app02 ~]# sfdisk -d /dev/cciss/c0d0 | sfdisk --no-reread --force /dev/cciss/c0d1
[root@app02 ~]# mdadm --manage /dev/md0 --fail /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --remove /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md0 --add /dev/cciss/c0d1p1
[root@app02 ~]# mdadm --manage /dev/md1 --fail /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --remove /dev/cciss/c0d1p2
[root@app02 ~]# mdadm --manage /dev/md1 --add /dev/cciss/c0d1p2
[root@app02 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 cciss/c0d1p2[2] cciss/c0d0p2[0]
36925312 blocks [2/1] [U_]
resync=DELAYED
md0 : active raid1 cciss/c0d1p1[2] cciss/c0d0p1[0]
256003712 blocks [2/1] [U_]
[>....................] recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
unused devices: <none>
</syntaxhighlight>
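The finish estimate in the recovery line can be sanity-checked by hand: remaining blocks (KiB in /proc/mdstat) divided by the reported speed. A sketch using the numbers above (mdadm itself shows 2680.2 min because it smooths the speed over time):

```shell
# recovery = 0.0% (38144/256003712) finish=2680.2min speed=1589K/sec
done_blocks=38144
total_blocks=256003712
speed_kps=1589
eta_min=$(( (total_blocks - done_blocks) / speed_kps / 60 ))
echo "estimated finish: ${eta_min} min"
```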
b6f93b07ffb150564bb9379bcad99f8187d5ef9c
Linux grub
0
297
2272
2195
2021-11-25T15:50:20Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux|Grub]]
[[Category:Grub|Linux]]
=grub rescue>=
The problem:
<syntaxhighlight lang=bash>
...
Entering rescue mode...
grub rescue>
</syntaxhighlight>
==Get into the normal grub==
Find your devices:
<syntaxhighlight lang=bash>
grub rescue> ls
</syntaxhighlight>
===Find the directory where the normal.mod file resides===
In this example we have LVM, and /boot/grub is on the LV lv-root in the VG vg-root.
<syntaxhighlight lang=bash>
grub rescue> ls (lvm/vg--root-lv--root)/boot/grub/i386-pc
... normal.mod ...
</syntaxhighlight>
===Set the prefix to the right place===
<syntaxhighlight lang=bash>
grub rescue> set prefix=(lvm/vg--root-lv--root)/boot/grub
</syntaxhighlight>
===Now you can load and start the module called "normal"===
<syntaxhighlight lang=bash>
grub rescue> insmod normal
grub rescue> normal
</syntaxhighlight>
If the menu does not appear, you get something like this:
<syntaxhighlight lang=bash>
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</syntaxhighlight>
==Normal grub is booted, now start the kernel==
Example for LVM:
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod lvm
insmod ext2
set root='lvmid/KAlPF4-Qb8I-Sx41-10cC-lACw-Msoh-3qEohv/pmE9Nt-rLG3-FlNM-CwOT-hy42-gSnm-fZSn3l'
linux /boot/vmlinuz-4.4.0-53-generic root=/dev/mapper/vg--root-lv--root ro
initrd /boot/initrd.img-4.4.0-53-generic
</syntaxhighlight>
Example for ZFS-Root:
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod zfs
set root='hd0,msdos4'
linux /ROOT/ubuntu-15.04@/boot/vmlinuz-4.4.0-57-generic root=ZFS=rpool/ROOT/ubuntu-15.04 boot=zfs zfs_force=1 ro quiet splash nomdmonddf nomdmonisw $vt_handoff
initrd /ROOT/ubuntu-15.04@/boot/initrd.img-4.4.0-57-generic
</syntaxhighlight>
41d49d55afa0dd9f8ceeef7d81f229d929c61a5e
2280
2272
2021-11-25T15:51:11Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|Grub]]
[[Category:Grub|Linux]]
=grub rescue>=
The problem:
<syntaxhighlight lang=bash>
...
Entering rescue mode...
grub rescue>
</syntaxhighlight>
==Get into the normal grub==
Find your devices:
<syntaxhighlight lang=bash>
grub rescue> ls
</syntaxhighlight>
===Find the directory where the normal.mod file resides===
In this example we have LVM, and /boot/grub is on the LV lv-root in the VG vg-root.
<syntaxhighlight lang=bash>
grub rescue> ls (lvm/vg--root-lv--root)/boot/grub/i386-pc
... normal.mod ...
</syntaxhighlight>
===Set the prefix to the right place===
<syntaxhighlight lang=bash>
grub rescue> set prefix=(lvm/vg--root-lv--root)/boot/grub
</syntaxhighlight>
===Now you can load and start the module called "normal"===
<syntaxhighlight lang=bash>
grub rescue> insmod normal
grub rescue> normal
</syntaxhighlight>
If the menu does not appear, you get something like this:
<syntaxhighlight lang=bash>
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</syntaxhighlight>
==Normal grub is booted, now start the kernel==
Example for LVM:
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod lvm
insmod ext2
set root='lvmid/KAlPF4-Qb8I-Sx41-10cC-lACw-Msoh-3qEohv/pmE9Nt-rLG3-FlNM-CwOT-hy42-gSnm-fZSn3l'
linux /boot/vmlinuz-4.4.0-53-generic root=/dev/mapper/vg--root-lv--root ro
initrd /boot/initrd.img-4.4.0-53-generic
</syntaxhighlight>
Example for ZFS-Root:
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod zfs
set root='hd0,msdos4'
linux /ROOT/ubuntu-15.04@/boot/vmlinuz-4.4.0-57-generic root=ZFS=rpool/ROOT/ubuntu-15.04 boot=zfs zfs_force=1 ro quiet splash nomdmonddf nomdmonisw $vt_handoff
initrd /ROOT/ubuntu-15.04@/boot/initrd.img-4.4.0-57-generic
</syntaxhighlight>
899a8e88048ee7646a91bf841ae9461daace419a
Fibrechannel Analyse
0
139
2273
2204
2021-11-25T15:50:23Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:Solaris]]
[[category:Brocade]]
[[category:NetApp]]
[[category:FC]]
=Fibre Channel Analysis=
=Commands: Solaris=
==luxadm==
===luxadm -e port===
Prints the hardware paths of the existing Fibre Channel ports and their status:
<syntaxhighlight lang=bash>
# luxadm -e port
/devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,378@b/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:devctl CONNECTED
/devices/pci@79,0/pci10de,376@e/pci1077,143@0,1/fp@0,0:devctl NOT CONNECTED
</syntaxhighlight>
2 dual-port cards:
/devices/pci@79,0/pci10de,378@b/pci1077,143@0 und ...,1
/devices/pci@79,0/pci10de,376@e/pci1077,143@0 und ...,1
<syntaxhighlight lang=bash>
# prtdiag -v | head -1
System Configuration: Sun Microsystems Sun Fire X4440
</syntaxhighlight>
From the page [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1277396.1 Sun x86 Platforms: Matrix of Recognized Device Paths (Doc ID 1277396.1)] (Oracle Support login required):
Sun Fire x4440 (Tucana)
PCI:
PCIe SLOT0 /pci@0,0/pci10de,375@f/pci1000,3150@0 // with PCI Express 8-Port SAS/SATA HBA
PCIe SLOT0 /pci@0,0/pci10de,375@f/ // without PCI Express 8-Port SAS/SATA HBA
PCIe SLOT1 /pci@0,0/pci10de,376@e/
PCIe SLOT2 /pci@7c,0/pci10de,377@f/
PCIe SLOT3 /pci@0,0/pci10de,377@a/
PCIe SLOT4 /pci@7c,0/pci10de,376@e/
PCIe SLOT5 /pci@7c,0/pci10de,378@b/
(7c can be renamed something else depending on BIOS/OS version)
So our cards are in slots 4 and 5.
===luxadm -e dump_map <HW_path>===
Prints the table of devices known on a port
<syntaxhighlight lang=bash>
# luxadm -e dump_map /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 30200 0 202600a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
1 30600 0 202700a0b86e10e4 200600a0b86e10e4 0x0 (Disk device)
2 10100 0 203400a0b85bb030 200400a0b85bb030 0x0 (Disk device)
3 10500 0 203500a0b85bb030 200400a0b85bb030 0x0 (Disk device)
4 10200 0 202600a0b86e103c 200600a0b86e103c 0x0 (Disk device)
5 11400 0 202700a0b86e103c 200600a0b86e103c 0x0 (Disk device)
6 30100 0 203200a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
7 30500 0 203300a0b85aeb2d 200200a0b85aeb2d 0x0 (Disk device)
8 10800 0 2100001b32902d45 2000001b32902d45 0x1f (Unknown Type,Host Bus Adapter)
</syntaxhighlight>
Explanation of the interesting columns:
* Port_ID <Switch_ID><Switchport><??>
So there are obviously 2 switches in the fabric on port /devices/pci@79,0/pci10de,378@b/pci1077,143@0/fp@0,0:devctl
namely one with ID 1 and one with ID 3.
Switch ID 1
Ports 1 and 5 : Node WWN 200400a0b85bb030
Ports 2 and 14 : Node WWN 200600a0b86e103c
Port 8 : Node WWN 2000001b32902d45 (ourselves)
Switch ID 3
Ports 1 and 5 : Node WWN 200200a0b85aeb2d
Ports 2 and 6 : Node WWN 200600a0b86e10e4
So we are attached to the switch with ID 1 together with 2 storage systems, and we have a connection to a switch with ID 3 to which 2 further storage systems are attached.
* Node WWN
Here we see 4 disk devices with 2 entries each (same node WWN)
* Port WWN
This is the port WWN of the devices attached to the switch (at entry 8 we find ourselves).
Per storage system we see 2 port WWNs here, i.e. 2 paths via this single host port.
Hence the 4 paths later on (2 per host port) in [[#mpathadm list lu]].
* Type
Disk device: storage
Host Bus Adapter: FC card
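The Port_ID can be decoded mechanically if you read it as the standard 24-bit FC address (domain = switch ID, area = switch port, AL_PA); a small sketch, matching the dump_map table above:

```shell
# Decode a dump_map Port_ID, e.g. 30200 -> domain (switch) 3, area (port) 2.
fcid_decode() {
    v=$(( 0x$1 ))
    echo "domain=$(( (v >> 16) & 0xff )) area=$(( (v >> 8) & 0xff )) alpa=$(( v & 0xff ))"
}
fcid_decode 30200
fcid_decode 10800
```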
===luxadm -e rdls <HW_path> ===
<syntaxhighlight lang=bash>
# luxadm -e port 2>/dev/null | awk '{print $1;}' | xargs -n 1 luxadm -e rdls 2>/dev/null
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
30200 2 1 0 0 0 0
30600 2 1 0 0 0 0
10200 1 1 0 0 0 0
11400 2 1 0 0 0 0
10b00 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
Link Error Status information for loop:/devices/pci@0,0/pci8086,340e@7/pci111d,806e@0/pci111d,806e@2/pci1077,143@0,1/fp@0,0:devctl
al_pa lnk fail sync loss signal loss sequence err invalid word CRC
0 0 0 0 0 0 0
NOTE: These LESB counts are not cleared by a reset, only power cycles.
These counts must be compared to previously read counts.
</syntaxhighlight>
===luxadm probe===
Lists all detected Fibre Channel devices
<syntaxhighlight lang=bash>
#> luxadm probe
Found Fibre Channel device(s):
Node WWN:200600a0b86e10e4 Device Type:Disk device
Logical Path:/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
...
</syntaxhighlight>
===luxadm display <Diskpath|WWN>===
<syntaxhighlight lang=bash>
#> luxadm display /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
Vendor: SUN
Product ID: STK6580_6780
Revision: 0784
Serial Num: SP01068442
Unformatted capacity: 204800.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x300
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c8t600A0B80006E10E40000DC1C52E8B751d0s2
/devices/scsi_vhci/disk@g600a0b80006e10e40000dc1c52e8b751:c,raw
Controller /dev/cfg/c4
Device Address 202600a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class primary
State ONLINE
Controller /dev/cfg/c4
Device Address 202700a0b86e10e4,5
Host controller port WWN 2100001b328a417f
Class secondary
State STANDBY
Controller /dev/cfg/c6
Device Address 201600a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class primary
State ONLINE
Controller /dev/cfg/c6
Device Address 201700a0b86e10e4,5
Host controller port WWN 2100001b32904445
Class secondary
State STANDBY
</syntaxhighlight>
* Vendor: SUN
The manufacturer
* Product ID: STK6580_6780
So a StorageTek 6580/6780
* Revision: 0784
A rough indication of the firmware (firmware version: 07.84.47.10)
See [[#lsscs list array <array_name>]]
* Serial Num: SP01068442
Handy when working with NetApps, to map the LUNs.
* Unformatted capacity: 204800.000 MBytes
Always good to know
* Write Cache: Enabled
So the battery in the storage system should be OK ;-)
* Path(s):
Raw device path
Hardware device path
Now, for each path to this device, there follows a block of
Controller (see below)
Device Address <port WWN of the device>,<LUN ID>
Class <primary|secondary> (see below)
State <Online|Standby|Offline>
Mapping of the controller to the FC port via:
<syntaxhighlight lang=bash>
# ls -al /dev/cfg/c6
lrwxrwxrwx 1 root root 60 Sep 3 2009 /dev/cfg/c6 -> ../../devices/pci@79,0/pci10de,376@e/pci1077,143@0/fp@0,0:fc
</syntaxhighlight>
You can see the hardware path from [[#luxadm -e port]]
Class:
Via ALUA (Asymmetric Logical Unit Access) the device tells the host which paths it should primarily use to access the LUN.
==fcinfo==
===fcinfo hba-port===
Prints some information about manufacturer, model, firmware, port and node WWN, current speed, ...
<syntaxhighlight lang=bash>
#> fcinfo hba-port
HBA Port WWN: 2100001b328a417f
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b328a417f
HBA Port WWN: 2101001b32aa417f
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701860
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32aa417f
HBA Port WWN: 2100001b32904445
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b32904445
HBA Port WWN: 2101001b32b04445
OS Device Name: /dev/cfg/c7
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.06.00
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-0918701887
Driver Name: qlc
Driver Version: 20110825-3.06
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2001001b32b04445
</syntaxhighlight>
===fcinfo remote-port --port <HBA Port WWN> --linkstat===
<syntaxhighlight lang=bash>
# fcinfo remote-port --port 2100001b32904445 --linkstat
Remote Port WWN: 201600a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 3
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e103c
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e103c
Link Error Statistics:
Link Failure Count: 4
Loss of Sync Count: 261
Loss of Signal Count: 4
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202200a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202300a0b85aeb2d
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200200a0b85aeb2d
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201600a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 201700a0b86e10e4
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200600a0b86e10e4
Link Error Statistics:
Link Failure Count: 3
Loss of Sync Count: 1
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202400a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 2
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
Remote Port WWN: 202500a0b85bb030
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 200400a0b85bb030
Link Error Statistics:
Link Failure Count: 2
Loss of Sync Count: 1
Loss of Signal Count: 3
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
</syntaxhighlight>
===fcinfo remote-port --port <HBA Port WWN> --scsi-target===
<syntaxhighlight lang=bash>
# fcinfo hba-port | grep HBA
HBA Port WWN: 21000024ff3cf472
HBA Port WWN: 21000024ff3cf473
HBA Port WWN: 21000024ff3cf454
HBA Port WWN: 21000024ff3cf455
# fcinfo remote-port --port 21000024ff3cf472 --scsi-target
Remote Port WWN: 20110002ac0059ce
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 2ff70002ac0059ce
LUN: 0
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000002000059CEd0s2
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000003000059CEd0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c6t60002AC00000000000000004000059CEd0s2
...
</syntaxhighlight>
===fcinfo lu -v <device>===
<syntaxhighlight lang=bash>
# fcinfo lu -v /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
OS Device Name: /dev/rdsk/c0t60030D90D9DD1A059655804D4A5EAD2Ed0s2
HBA Port WWN: 2100000e1ed89451
Controller: /dev/cfg/c4
Remote Port WWN: 2100f4e9d4564d21
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c97
LUN: 11
State: active/non-optimized
HBA Port WWN: 2100000e1ed89450
Controller: /dev/cfg/c3
Remote Port WWN: 2100f4e9d4564d44
LUN: 11
State: active/optimized
Remote Port WWN: 2100f4e9d4564c1c
LUN: 11
State: active/non-optimized
Vendor: DataCore
Product: Virtual Disk
Device Type: Disk Device
Unformatted capacity: 204800.000 MBytes
</syntaxhighlight>
==mpathadm==
===mpathadm list lu===
<syntaxhighlight lang=bash>
</syntaxhighlight>
==cfgadm==
===cfgadm -al -o show_FCP_dev [<controller>]===
<syntaxhighlight lang=bash>
# cfgadm -al -o show_FCP_dev | grep unusable
c8::21000024ff2d49a2,0 disk connected configured unusable
c8::21000024ff2d49a2,1 disk connected configured unusable
c8::21000024ff2d49a2,2 disk connected configured unusable
c8::21000024ff2d49a2,3 disk connected configured unusable
c8::21000024ff2d49a2,4 disk connected configured unusable
c8::21000024ff2d49a2,5 disk connected configured unusable
c8::21000024ff2d49a2,6 disk connected configured unusable
c8::21000024ff2d49a2,7 disk connected configured unusable
c8::21000024ff2d49a2,8 disk connected configured unusable
c8::21000024ff2d49a2,9 disk connected configured unusable
c8::21000024ff2d49a2,10 disk connected configured unusable
c9::203400a0b839c421,31 disk connected configured unusable
c9::203400a0b84913d2,31 disk connected configured unusable
c9::203500a0b839c421,31 disk connected configured unusable
c9::203500a0b84913d2,31 disk connected configured unusable
</syntaxhighlight>
===cfgadm -c unconfigure -o unusable_SCSI_LUN <unusable device>===
<syntaxhighlight lang=bash>
# cfgadm -c unconfigure -o unusable_SCSI_LUN c8::21000024ff2d49a2
</syntaxhighlight>
Clean up all of them:
<syntaxhighlight lang=bash>
# cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 cfgadm -c unconfigure -o unusable_SCSI_LUN
</syntaxhighlight>
===cfgadm -o force_update -c configure <controller>===
Rescans LUNs. Be careful! This performs a forcelip (forced loop initialization)!
<syntaxhighlight lang=bash>
# cfgadm -o force_update -c configure c10
</syntaxhighlight>
==prtconf -Da <device>==
<syntaxhighlight lang=bash>
# prtconf -Da /dev/cfg/c3
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3410, instance #5 (driver name: pcieb)
pci111d,806e, instance #12 (driver name: pcieb)
pci111d,806e, instance #13 (driver name: pcieb)
pci1077,170, instance #0 (driver name: qlc) <---
fp, instance #0 (driver name: fp)
</syntaxhighlight>
==LUN masking (controlling which LUNs of a storage system this host may see)==
Example: syslog noise caused by a visible LUN (LUN 7) that this host is not supposed to use:
<syntaxhighlight lang=bash>
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@380/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0/ssd@w204300a096691217,7 (ssd7):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
Nov 6 13:44:59 server01 cmlb: WARNING: /pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0/ssd@w203300a096691217,7 (ssd2):
Nov 6 13:44:59 server01 Corrupt label; wrong magic number
...
</syntaxhighlight>
<syntaxhighlight lang=bash>
# cat /etc/driver/drv/fp.conf
mpxio-disable="no";
pwwn-lun-blacklist=
"203200a096691265,7",
"203300a096691265,7",
"204200a096691265,7",
"204300a096691265,7",
"203200a096691217,7",
"203300a096691217,7",
"204200a096691217,7",
"204300a096691217,7";
</syntaxhighlight>
<syntaxhighlight lang=bash>
# reboot -- -r
...
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0 File and args: -r
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 203200a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691217 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204300a096691265 is masked due to black listing.
/pseudo/fcp@0 (fcp0):
LUN 7 of port 204200a096691265 is masked due to black listing.
Configuring devices.
</syntaxhighlight>
=Commands: Common Array Manager=
==lsscs==
On Solaris it is located in /opt/SUNWsefms/bin.
===lsscs list array===
<syntaxhighlight lang=bash>
</syntaxhighlight>
===lsscs list array <array_name>===
<syntaxhighlight lang=bash>
</syntaxhighlight>
===lsscs list -a <array_name> fcport===
<syntaxhighlight lang=bash>
</syntaxhighlight>
=Commands: Brocade=
==Switch commands==
===switchshow===
<syntaxhighlight lang=bash>
san-sw_11:admin> switchshow
switchName: san-sw_11
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:33:df:43:5a
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N8 No_Light FC
1 1 010100 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21" (downstream)
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:05:74:e4
3 3 010300 id N8 Online FC F-Port 50:0a:09:81:8d:32:5d:c4
4 4 010400 id N8 No_Light FC
5 5 010500 id N8 Online FC E-Port 10:00:00:05:33:df:bd:b9 "san-sw_21"
6 6 010600 id N4 Online FC F-Port 20:06:00:a0:b8:32:38:17
7 7 010700 id N4 Online FC F-Port 20:07:00:a0:b8:32:38:17
8 8 010800 id N4 Online FC F-Port 21:00:00:1b:32:91:4c:ed
9 9 010900 id N4 Online FC F-Port 21:00:00:1b:32:98:05:1a
10 10 010a00 id N8 Online FC F-Port 21:00:00:24:ff:4a:d3:bc
11 11 010b00 id N8 No_Light FC
12 12 010c00 id N8 No_Light FC
13 13 010d00 id N8 No_Light FC
14 14 010e00 id N8 No_Light FC
15 15 010f00 id N8 No_Light FC
16 16 011000 -- N8 No_Module FC (No POD License) Disabled
17 17 011100 -- N8 No_Module FC (No POD License) Disabled
18 18 011200 -- N8 No_Module FC (No POD License) Disabled
19 19 011300 -- N8 No_Module FC (No POD License) Disabled
20 20 011400 -- N8 No_Module FC (No POD License) Disabled
21 21 011500 -- N8 No_Module FC (No POD License) Disabled
22 22 011600 -- N8 No_Module FC (No POD License) Disabled
23 23 011700 -- N8 No_Module FC (No POD License) Disabled
</syntaxhighlight>
What does this tell us?
# This switch is the "Principal" (all others are "Subordinate") of the fabric "Fabric1" (switchRole:, zoning:)
# The switch is zoned (zoning:)
# The switch ID is "fffc01"
# It is a 24-port switch
# There is a double ISL (Inter-Switch Link) to another switch's E-Port (san-sw_21)
# 7 ports are populated with SFPs but unused (0, 4, 11-15)
# 8 ports have no license and no SFP either (No_Module)
# 9 ports are in use
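The state counts in the list above can be derived mechanically from the switchshow port table; here is a minimal awk sketch (the sample lines are an abbreviated copy of a switchshow listing, and reading the state from column 6 is an assumption based on the output format above):
<syntaxhighlight lang=bash>
# Summarize port states from a switchshow port table.
# In practice you would pipe the real output in: switchshow | awk ...
switchshow_sample='  0   0   010000   id    N8   No_Light    FC
  2   2   010200   id    N8   Online      FC  F-Port  21:00:00:24:ff:05:74:e4
  5   5   010500   id    N8   Online      FC  E-Port  10:00:00:05:33:df:bd:b9 "san-sw_21"
 16  16   011600   --    N8   No_Module   FC  (No POD License) Disabled'

# Column 6 of the port table holds the state (Online, No_Light, No_Module).
echo "$switchshow_sample" | awk '
    $6 ~ /^(Online|No_Light|No_Module)$/ { count[$6]++ }
    END { for (s in count) print s, count[s] }' | LC_ALL=C sort
</syntaxhighlight>
Run against the full listing above, this yields 9 Online, 7 No_Light and 8 No_Module ports.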
===fabricshow===
<syntaxhighlight lang=bash>
san-sw_11:root> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:33:df:43:5a 192.168.1.117 0.0.0.0 >"san-sw_11"
2: fffc02 10:00:00:05:33:df:bd:b9 192.168.1.119 0.0.0.0 "san-sw_21"
The Fabric has 2 switches
</syntaxhighlight>
===islshow===
<syntaxhighlight lang=bash>
rz1_fab2_11:admin> islshow
1: 1-> 0 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
2: 2-> 0 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
3: 3-> 0 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
4: 5-> 17 10:00:00:05:1e:0d:5e:96 12 bc1_sl4_fab2_12 sp: 4.000G bw: 4.000G
5: 6-> 17 10:00:00:05:1e:0d:e2:53 13 bc2_sl4_fab2_13 sp: 4.000G bw: 4.000G
6: 7-> 17 10:00:00:05:1e:b3:71:bf 14 bc3_sl4_fab2_14 sp: 4.000G bw: 4.000G
7: 10-> 8 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
8: 18-> 0 10:00:50:eb:1a:45:71:96 15 rz-6510-fab2-15 sp: 4.000G bw: 4.000G
</syntaxhighlight>
==Port commands==
===porterrshow===
===portstatsshow===
===portstatsclear===
===portloginshow===
Shows information about NPIV ports.
<syntaxhighlight lang=bash>
fcsw1:admin> switchshow
...
Index Port Address Media Speed State Proto
==================================================
...
34 34 0f2200 id N16 Online FC F-Port 1 N Port + 1 NPIV public
...
</syntaxhighlight>
Behind this port sits a NetApp 8080 running cDOT, as you can see with <i>nodefind <address></i>:
<syntaxhighlight lang=bash>
fcsw1:admin> nodefind 0f2200
Local:
Type Pid COS PortName NodeName SCR
N 0f2200; 3;50:0a:09:82:80:d1:21:ee;50:0a:09:80:80:d1:21:ee; 0x00000000
PortSymb: [45] "NetApp FC Target Adapter (8324) cdot1-01:0g"
NodeSymb: [38] "NetApp FAS8080 (cdot1-01/cdot1-02)"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: Physical Unknown(initiator/target)
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases:
</syntaxhighlight>
Now look with <i>portloginshow <portnumber></i>:
<syntaxhighlight lang=bash>
fcsw1:admin> portloginshow 34
Type PID World Wide Name credit df_sz cos
=====================================================
fd 0f2201 20:00:00:a0:98:5d:33:82 6 2048 8 scr=0x3
fe 0f2200 50:0a:09:82:80:d1:21:ee 6 2048 8 scr=0x0
ff 0f2201 20:00:00:a0:98:5d:33:82 0 0 8 d_id=FFFFFC
ff 0f2200 50:0a:09:82:80:d1:21:ee 0 0 8 d_id=FFFFFC
</syntaxhighlight>
With this information you can find out more about the WWNs:
<syntaxhighlight lang=bash>
fcsw1:admin> nodefind 20:00:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
</syntaxhighlight>
It even shows the Vserver (NodeSymb)!
And with the NodeName you can find all logical interfaces of this SVM:
<syntaxhighlight lang=bash>
fcsw1:admin> nodefind 20:04:00:a0:98:5d:33:82
Local:
Type Pid COS PortName NodeName SCR
N 0f2201; 3;20:00:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-01_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:22:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:d1:21:ee
Device type: NPIV Target
Port Index: 34
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_01_lif1
N 0f2301; 3;20:02:00:a0:98:5d:33:82;20:04:00:a0:98:5d:33:82; 0x00000003
FC4s: FCP
PortSymb: [58] "NetApp FC Target Port (8324) cdot1fc:cdot1-02_fc_lif_1"
NodeSymb: [24] "NetApp Vserver cdot1fc"
Fabric Port Name: 20:23:50:eb:1a:42:f8:45
Permanent Port Name: 50:0a:09:82:80:61:21:e8
Device type: NPIV Target
Port Index: 35
Share Area: No
Device Shared in Other AD: No
Redirect: No
Partial: No
LSAN: No
Aliases: cdot1fc_02_lif1
</syntaxhighlight>
==Zone commands==
===zoneshow===
===alicreate===
===alishow===
==Backing up the switch config via script==
===Put the backup host's SSH public key on the switches===
<syntaxhighlight lang=bash>
fcsw1:root> cat >/root/.ssh/authorized_keys <<EOF
> ssh-dss AAAAB3NzaC1...
...
...
lF8qsgtTD8cc= root@host
> EOF
</syntaxhighlight>
===Generate an SSH key on the switches===
<syntaxhighlight lang=bash>
fcsw1:root> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2a:23:33:...:69:bc:25:a5:f9 root@fcsw1
The key's randomart image is:
+--[ RSA 2048]----+
| |
| ... |
| |
+-----------------+
</syntaxhighlight>
===Copy the key to your backup user's ~/.ssh/authorized_keys on the backup host===
<syntaxhighlight lang=bash>
fcsw1:root> cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAA...
...
KHnw1T1NaQ== root@fcsw1
</syntaxhighlight>
===Now the script on the backup host===
<syntaxhighlight lang=bash>
# cat /opt/bin/backup_brocade_config
#!/bin/bash
SWITCHES="
172.30.40.50
172.30.40.51
"
LOCALUSER="backupuser"
BACKUPDIR="brocade_backup"
BACKUPHOST="172.30.40.10"
DATE="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${DATE}.txt... "
ssh root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUPHOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${DATE}.txt
done
</syntaxhighlight>
==Script to parse a configupload file==
<syntaxhighlight lang=awk>
#!/usr/bin/gawk -f
BEGIN{
vendor["001438"]="Hewlett-Packard";
vendor["00a098"]="NetApp";
vendor["0024ff"]="Qlogic";
vendor["001b32"]="Qlogic";
vendor["0000c9"]="Emulex";
vendor["00e002"]="CROSSROADS SYSTEMS, INC.";
}
/\[Zoning\]/,/^$/ {
if(/^cfg./){
split($0,cfgparts,":");
gsub(/^cfg./,"",cfgparts[1]);
cfg[cfgparts[1]]=cfgparts[2];
}
else if(/^zone./) {
zonename=$0;
gsub(/:.*$/,"",zonename);
gsub(/^zone./,"",zonename);
zonemembers=$0;
gsub(/^[^:]*:/,"",zonemembers);
zone[zonename]=zonemembers;
}
else if(/^alias./) {
aliasname=$0;
gsub(/:.*$/,"",aliasname);
gsub(/^alias./,"",aliasname);
aliasmembers=$0;
gsub(/^[^:]*:/,"",aliasmembers);
alias[aliasname]=aliasmembers;
if(length(aliasname)>longestalias){
longestalias=length(aliasname);
}
}
else if(/^enable:/) {
cfgenabled=$0;
gsub(/^enable:/,"",cfgenabled);
}
}
END {
print "Config:",cfgenabled;
split(cfg[cfgenabled],active_zones,";");
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
print "Zone",active_zones[active_zone],"(",length(zone_members),"Members ):";
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
member=alias[member];
}
WWN=member;
gsub(/:/,"",WWN);
if(WWN ~ /^5/){start=2;}else{start=5;}
vendor_id=substr(WWN,start,6);
printf " Member: %s\t",member;
if(alias[zone_members[zone_member]]!=""){
format=sprintf("%%s%%%ds\t",longestalias-length(zone_members[zone_member]));
printf format,zone_members[zone_member]," ";
}
printf "%s\n",vendor[vendor_id];
}
}
printf "\n\n\nCreate config:\n-------------------------------------------------\n";
printf "cfgdelete \"%s\"\n",cfgenabled;
for(active_zone in active_zones) {
split(zone[active_zones[active_zone]],zone_members,";");
asort(zone_members);
for(zone_member in zone_members){
member=zone_members[zone_member];
if(alias[member]!=""){
printf "alicreate \"%s\",\"%s\"\n",member,alias[member];
alias[member]="";
}
}
printf "zonecreate \"%s\",\"%s\"\n",active_zones[active_zone],zone[active_zones[active_zone]];
if(!secondelement){
secondelement=1;
printf "cfgcreate";
} else {
printf "cfgadd ";
}
printf " \"%s\",\"%s\"\n",cfgenabled,active_zones[active_zone];
}
printf "cfgsave\ncfgenable \"%s\"\n",cfgenabled;
}
</syntaxhighlight>
=Commands: NetApp=
==fcp topology show: Where is my front-end SAN connected?==
<syntaxhighlight lang=bash>
fas01> fcp topology show
Switches connected on adapter 0d:
None connected.
Switches connected on adapter 0c:
None connected.
Switches connected on adapter 1a:
Switch Name: fcsw01
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c6:1e:6c
Port Count: 24
Switches connected on adapter 1b:
Switch Name: fcsw02
Switch Vendor: Brocade Communications, Inc.
Switch Release: v6.4.2a
Switch Domain: 1
Switch WWN: 10:00:00:05:33:c7:5e:d2
Port Count: 24
Switches connected on adapter 1c:
None connected.
Switches connected on adapter 1d:
None connected.
</syntaxhighlight>
==fcp config <port>: Which WWN do I have?==
<syntaxhighlight lang=bash>
fas01> fcp config 1a
1a: ONLINE <ADAPTER UP> PTP Fabric
host address 010600
portname 50:0a:09:83:90:00:29:24 nodename 50:0a:09:80:80:00:29:24
mediatype auto speed auto
</syntaxhighlight>
A nice extra is the "host address", which tells us that we are attached to switch ID 01, port 06.
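The "host address" is a standard 24-bit Fibre Channel address identifier, so its three bytes can be decoded directly; a small bash sketch (the value 010600 is taken from the output above, the variable names are mine):
<syntaxhighlight lang=bash>
# Decode a 24-bit FC address ID ("host address") into its bytes:
# domain (switch ID), area (switch port), AL_PA (0 for an N_Port on a fabric).
fc_addr=010600
domain=$((16#${fc_addr:0:2}))
area=$((16#${fc_addr:2:2}))
alpa=$((16#${fc_addr:4:2}))
printf 'domain=%d area=%d alpa=%d\n' "$domain" "$area" "$alpa"
# -> domain=1 area=6 alpa=0
</syntaxhighlight>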
==fcp wwpn-alias (set|show): alias names for clearer debugging==
<syntaxhighlight lang=bash>
fas01> fcp wwpn-alias set sun07_Slot2_Port0 21000024ff363a5a
fas01> fcp wwpn-alias show
WWPN Alias
---- -----
21:00:00:24:ff:36:3a:5a sun07_Slot2_Port0
</syntaxhighlight>
==sanlun lun show -d <dev> (with Solaris and ZPool)==
If you want to know which NetApp LUNs belong to a ZPool, it works as follows:
<syntaxhighlight lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# zpool status | nawk '/c[0-9]t/{dev=$1;gsub(/s[0-9]+$/,"",$1);command="/opt/NTAP/SANToolkit/bin/sanlun lun show -d /dev/rdsk/"$1"s2";command | getline; command | getline; print dev,$1$2;next;}{print;}'
Pool: testpool
Status: ONLINE
scan: resilvered 11,0G in 0h1m with 0 errors on Thu Oct 2 09:41:39 2014
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c5t60A98000433544625634696B76705370d0s0 fas01:/vol/testlun/LUN0
c5t60A980003830304F392446473844375Ad0 fas02:/vol/testlun/LUN0
</syntaxhighlight>
=Miscellaneous=
==Finding all WWNs in a file==
Prints only the WWNs, including multiple matches when several occur on one line.
<syntaxhighlight lang=awk>
gawk '{line=$0;while(match(line,/[0-9a-f]{2}(:[0-9a-f]{2}){7}/,wwn)){line=substr(line,wwn[0,"start"]+wwn[0,"length"]); print wwn[0];}}' <file>
</syntaxhighlight>
==Some additions to NetApp's sanlun lun show on Solaris==
<syntaxhighlight lang=awk>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | gawk '
$3 ~ /\/dev\// {
sanlun=$0;
cmd="luxadm display "$3;
while( cmd|getline line ){
count=split(line,word);
if(line ~ /DEVICE PROPERTIES for disk:/){
disk=word[count];
ctrl="";
dev_addr="";
svm_ports="";
delete ports;
delete pri;
delete sec;
delete paths;
delete online;
continue;
}
if(line ~ /Controller/){
ctrl=word[count];
continue;
}
if(line ~ /Device Address/){
dev_addr=word[count];
gsub(/,.*$/,"",dev_addr);
ports[dev_addr]=1;
pair=ctrl"_"dev_addr;
continue;
}
if(line ~ /Class/){
class[pair]=word[count];
if(word[count]=="primary"){
pri[disk]++;
} else {
sec[disk]++;
}
continue;
}
if(line ~ /State/){
state[pair]=word[count];
paths[disk]++;
if(word[count]=="ONLINE"){
online[disk]++;
}
}
if(line ~ /^$/ && ctrl!=""){
for(port in ports){
if(svm_ports==""){
sep="";
} else {
sep=",";
}
svm_ports=svm_ports sep port;
}
printf "%s %2d/%2d %2d/%2d %s\n",sanlun,online[disk],paths[disk],pri[disk],sec[disk], svm_ports;
}
}
close(cmd);
next;
}
/^vserver/{
line=sprintf("%s Online/Total Primary/Secondary Device Addresses\n", $0);
printf line;
gsub(/./,"-",line);
print line;
next;
}
/^[-]+$/{next;}
{print;}
'
</syntaxhighlight>
42b7ac132423369000f541c2c5de690f51b260bc
NetApp and Solaris
0
219
2274
2227
2021-11-25T15:50:26Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NetApp|Solaris]]
[[Category:Solaris|NetApp]]
'''Just some unsorted lines...'''
'''Working on it... don't believe what you read here! It has not been verified yet.'''
==Settings in Solaris==
Settings for MPxIO over FC:
===/kernel/drv/ssd.conf===
<syntaxhighlight lang=bash>
###### START changes by host_config #####
ssd-config-list="NETAPP LUN", "physical-block-size:4096, retries-busy:30, retries-reset:30, retries-notready:300, retriestimeout:10, throttle-max:64, throttle-min:8";
###### END changes by host_config ####
</syntaxhighlight>
===Check it out===
<syntaxhighlight lang=bash>
# iostat -Er | /opt/sfw/bin/gawk 'BEGIN{command="echo ::ssd_state | mdb -k"; while(command|getline){if(/^un [0-9]+:/ && $NF != "0"){ssd=$2;gsub(/:$/,"",ssd);while(!/^}/){command|getline;if(/un_phy_blocksize/){un_phy_blocksize[ssd]=strtonum($NF);}}}};close(command);}/ssd/{ssd=$1;gsub(/^ssd/,"",ssd);getline;split($0,vendor,",");printf "ssd: %s\tun_phy_blocksize: %d\t%s\t%s\n",ssd,un_phy_blocksize[ssd],vendor[1],vendor[4];}'
</syntaxhighlight>
==Alignment and ZFS==
First read [https://library.netapp.com/ecmdocs/ECMP1148982/html/GUID-42CC2EB6-E667-4305-914C-7C2C459EF841.html ZFS zpools create misaligned I/O in Solaris 11 and Solaris 10 Update 8 and later (407376)].
If your storage uses a 4k block size, use ashift=12 (alignment shift exponent).
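For any other power-of-two block size, the right ashift is simply its base-2 logarithm; a quick shell sketch (the block size value is just an example):
<syntaxhighlight lang=bash>
# ashift = log2(blocksize); valid for any power-of-two block size.
blocksize=4096
ashift=0
while [ $((1 << ashift)) -lt "$blocksize" ]; do
    ashift=$((ashift + 1))
done
echo "blocksize=$blocksize -> ashift=$ashift"
# -> blocksize=4096 -> ashift=12
</syntaxhighlight>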
===Status of alignment===
<syntaxhighlight lang=bash>
# ssh filer01 "priv set -q diag ; lun show -v all; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun 50g (53687091200) (r/w, online, mapped)
Serial#: 800KP+EpO-33
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=40
Occupied Size: 46.2g (49583595520)
Creation Time: Wed Jan 7 11:37:58 CET 2015
---> Alignment: partial-writes
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201 25g (26843545600) (r/w, online, mapped)
Serial#: 800KP+EpO-2t
Share: none
Space Reservation: enabled
Multiprotocol Type: solaris_efi
Maps: SUN_SERVER01_SERVER02=35
Occupied Size: 21.6g (23195856896)
Creation Time: Fri Jul 4 11:02:34 CEST 2014
---> Alignment: misaligned
Cluster Shared Volume Information: 0x0
Space_alloc: disabled
report-physical-size: enabled
Read-Only: disabled
...
</syntaxhighlight>
Or use "lun alignment show":
<syntaxhighlight lang=bash>
# ssh filer01 "priv set -q diag ; lun alignment show; priv set"
-------------------------------------------------------------------------------
LUN for ZFS
Widely scattered reads. I think the ashift is not correct.
-------------------------------------------------------------------------------
/vol/ZoneLUNs/Zone01.lun
Multiprotocol type: solaris_efi
Alignment: partial-writes
Write alignment histogram percentage: 5, 5, 4, 6, 4, 6, 14, 5
Read alignment histogram percentage: 8, 7, 10, 7, 7, 8, 36, 5
Partial writes percentage: 47
Partial reads percentage: 9
-------------------------------------------------------------------------------
LUN for Oracle Database
-------------------------------------------------------------------------------
/vol/TEMP201/TEMP201
Multiprotocol type: solaris_efi
Alignment: misaligned
Write alignment histogram percentage: 0, 0, 0, 0, 0, 0, 99, 0
Read alignment histogram percentage: 0, 0, 8, 0, 0, 0, 77, 0
Partial writes percentage: 0
Partial reads percentage: 14
</syntaxhighlight>
Or "stats show lun":
<syntaxhighlight lang=bash>
filer01*> stats show -e lun:/vol/TEMP201:.*_align_histo.*
</syntaxhighlight>
===ashift=12? Why 12?===
<syntaxhighlight lang=bash>
# echo "2^12" | bc -l
4096
</syntaxhighlight>
OK... 4k... I see.
===What ashift do I have?===
<syntaxhighlight lang=bash>
# zdb | egrep ' name|ashift'
name: 'apache_pool'
ashift: 9
name: 'mysql_pool'
ashift: 9
...
</syntaxhighlight>
===Create ZPools on NetApp LUNs with this syntax===
<syntaxhighlight lang=bash>
# zpool create -o ashift=12 <mypool> mirror <vdev1> <vdev2>
</syntaxhighlight>
===Solaris Cluster===
<syntaxhighlight lang=bash>
# /opt/NTAP/SANToolkit/bin/sanlun lun show | nawk '$3 ~ /^\/dev\//{line=$0;gsub(/s[0-9]+$/,"",$3);command="/usr/cluster/bin/cldev list "$3; command | getline; close(command); print line,$1; next;}NR==2{print $0,"DID";next;}NR==3{print $0"-------";next}{print;}'
controller(7mode)/ device host lun
vserver(Cmode) lun-pathname filename adapter protocol size mode DID
--------------------------------------------------------------------------------------------------------------------------------------------------------
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_1 /dev/rdsk/c0t600A0980383033777B244834556D4865d0s2 iscsi0 iSCSI 500.1g C d5
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_2 /dev/rdsk/c0t600A0980383033777B244834556D4866d0s2 iscsi0 iSCSI 500.1g C d6
ncl01-iscsi-svm1 /vol/vol_cyrus01/lun_tz_cyrus01_3 /dev/rdsk/c0t600A0980383033777B244834556D4867d0s2 iscsi0 iSCSI 500.1g C d7
...
</syntaxhighlight>
==Links==
* [http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives Illumos: List of sd-config-list entries for Advanced-Format drives]
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp: What is an unaligned I/O?]
e09e6af695cd86e1a8b90d41e9cd4c95acff5c7e
Solaris zone memory on the fly
0
118
2275
2214
2021-11-25T15:50:52Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Zone Memory]]
= Setting memory parameter for running zones =
You can change memory parameters for running zones. But remember to also make them persistent by changing the zone's config file.
So I always do that part first.
== Change setting in the config file ==
<syntaxhighlight lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</syntaxhighlight>
== Change settings for the running zone ==
===First take a look===
<syntaxhighlight lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 65536 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</syntaxhighlight>
===Set the new values===
<syntaxhighlight lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
# prctl -n zone.max-locked-memory -v 16g -t privileged -r -e deny -i zone myzone
</syntaxhighlight>
===Verify the values===
<syntaxhighlight lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</syntaxhighlight>
Done.
087dafc16504589ead32c056d5dd3136d9ba5cd2
Solaris Einzeiler
0
200
2276
2215
2021-11-25T15:50:53Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Einzeiler]]
=== netstat -aun or lsof -i -P -n on Solaris 10 ===
<syntaxhighlight lang=bash>
#!/bin/bash
pfiles /proc/* 2>/dev/null | nawk -v port=$1 '
/^[0-9]/ {
pid=$1; cmd=$2; type="unknown"; next;
}
$1 == "SOCK_STREAM" {
type="tcp"; next;
}
$1 == "SOCK_DGRAM" {
type="udp"; next;
}
$2 ~ /AF_INET?/ && ( port=="" || $5==port ) {
if($2 ~ /[0-9]$/ && type !~ /[0-9]$/) type=type""substr($2,8);
if(cmd!="") { printf("%d %s\n",pid,cmd); cmd="" }
printf(" %s:%s/%s\n",$3,$5,type);
}'
</syntaxhighlight>
0db44a3a8d5dcd70892bdf7800db86f70b85ed23
Caulastrea sp.
0
127
2277
353
2021-11-25T15:51:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| Bild = Caulastrea_sp.JPG
| Bildbeschreibung = Young Caulastrea colony
| DeName = Trompetenkoralle
| WissName = Caulastrea sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
<gallery mode="packed-hover">
Image:Caulastrea_sp.JPG|Young Caulastrea colony
</gallery>
640709eeea4780a86d5648c0ea09a4aaa5e70dab
Sendmail sender rewrite
0
102
2278
284
2021-11-25T15:51:06Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Sendmail]]
=Sender rewrite=
In the .mc file:
<pre>
FEATURE(`genericstable')dnl
GENERICS_DOMAIN_FILE(`/etc/mail/genericsdomain')dnl
</pre>
==/etc/mail/genericsdomain==
<pre>
src-domain.de
</pre>
==Testing the genericsdomain==
<pre>
# sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> $=G
src-domain.de
>
</pre>
==/etc/mail/genericstable==
<pre>
# localuser in any genericsdomain -> dst-user@dst-domain.de
localuser dst-user@dst-domain.de
# any other user @src-domain.de -> default-user@dst-domain.de
@src-domain.de default-user@dst-domain.de
</pre>
==Generating the translation database genericstable.db==
<pre>
# makemap -f hash /etc/mail/genericstable.db < /etc/mail/genericstable
</pre>
==Testing the genericstable.db==
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp localuser@src-domain.de
Trying header sender address localuser@src-domain.de for mailer esmtp
canonify input: localuser @ src-domain . de
Canonify2 input: localuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: localuser < @ src-domain . de . >
canonify returns: localuser < @ src-domain . de . >
1 input: localuser < @ src-domain . de . >
1 returns: localuser < @ src-domain . de . >
HdrFromSMTP input: localuser < @ src-domain . de . >
PseudoToReal input: localuser < @ src-domain . de . >
PseudoToReal returns: localuser < @ src-domain . de . >
MasqSMTP input: localuser < @ src-domain . de . >
MasqSMTP returns: localuser < @ src-domain . de . >
MasqHdr input: localuser < @ src-domain . de . >
map_lookup(generics, localuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => NOT FOUND (0)
map_lookup(generics, localuser) => dst-user@dst-domain.de (0)
canonify input: dst-user @ dst-domain . de
Canonify2 input: dst-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: dst-user < @ dst-domain . de >
canonify returns: dst-user < @ dst-domain . de >
MasqHdr returns: dst-user < @ dst-domain . de >
HdrFromSMTP returns: dst-user < @ dst-domain . de >
final input: dst-user < @ dst-domain . de >
final returns: dst-user @ dst-domain . de
Rcode = 0, addr = dst-user@dst-domain.de
</pre>
And any other user@src-domain.de:
<pre>
# sendmail -bt -d60.1
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> /tryflags hs
> /try esmtp anyuser@src-domain.de
Trying header sender address anyuser@src-domain.de for mailer esmtp
canonify input: anyuser @ src-domain . de
Canonify2 input: anyuser < @ src-domain . de >
map_lookup(host, src-domain.de) => NOT FOUND (68)
Canonify2 returns: anyuser < @ src-domain . de . >
canonify returns: anyuser < @ src-domain . de . >
1 input: anyuser < @ src-domain . de . >
1 returns: anyuser < @ src-domain . de . >
HdrFromSMTP input: anyuser < @ src-domain . de . >
PseudoToReal input: anyuser < @ src-domain . de . >
PseudoToReal returns: anyuser < @ src-domain . de . >
MasqSMTP input: anyuser < @ src-domain . de . >
MasqSMTP returns: anyuser < @ src-domain . de . >
MasqHdr input: anyuser < @ src-domain . de . >
map_lookup(generics, anyuser@src-domain.de) => NOT FOUND (0)
map_lookup(generics, @src-domain.de) => default-user@dst-domain.de (0)
canonify input: default-user @ dst-domain . de
Canonify2 input: default-user < @ dst-domain . de >
map_lookup(host, dst-domain.de) => NOT FOUND (68)
Canonify2 returns: default-user < @ dst-domain . de >
canonify returns: default-user < @ dst-domain . de >
MasqHdr returns: default-user < @ dst-domain . de >
HdrFromSMTP returns: default-user < @ dst-domain . de >
final input: default-user < @ dst-domain . de >
final returns: default-user @ dst-domain . de
Rcode = 0, addr = default-user@dst-domain.de
</pre>
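The two traces above show which generics keys are consulted: the full address `localuser@src-domain.de`, the bare local user, and the `@src-domain.de` fallback. The sketch below is a toy awk simulation of that table semantics, not sendmail's actual ruleset; the precedence order here is the one implied by the table comments (full address, then local user, then @domain fallback), and real sendmail consults further key forms.

```shell
# Toy lookup honoring the precedence implied by the table comments:
# full address first, then the bare local user, then the @domain fallback.
lookup() {
  awk -v addr="$1" '
    { table[$1] = $2 }                           # read "key value" lines
    END {
      user = addr;   sub(/@.*/,   "", user)     # e.g. "localuser"
      domain = addr; sub(/^[^@]*/, "", domain)  # e.g. "@src-domain.de"
      if      (addr   in table) print table[addr]
      else if (user   in table) print table[user]
      else if (domain in table) print table[domain]
      else                      print addr
    }'
}

table='localuser dst-user@dst-domain.de
@src-domain.de default-user@dst-domain.de'

printf '%s\n' "$table" | lookup localuser@src-domain.de   # -> dst-user@dst-domain.de
printf '%s\n' "$table" | lookup anyuser@src-domain.de     # -> default-user@dst-domain.de
```

This reproduces the two `/try esmtp` results above without a running sendmail.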
02c579df1649e9450f1c380932906422a7e31ac0
Solaris IO Analyse
0
208
2279
2196
2021-11-25T15:51:07Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris]]
==Which filesystem is busy?==
For ZFS (-F zfs) you can use this one-liner:
<syntaxhighlight lang=bash>
# fsstat -i $(df -hF zfs | nawk 'NR>1{print $NF}') 5
</syntaxhighlight>
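The `df | nawk` stage only extracts the mount points (the last column), skipping `df`'s header line, whose last field would otherwise be the word `on`. A minimal sketch on hypothetical `df -hF zfs` output (plain `awk` here; Solaris ships `nawk`, which behaves the same for this pattern):

```shell
# Hypothetical df -h -F zfs output
df_output='Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10          98G    12G    86G    13%    /
sample_pool/data1      1.0T   400G   624G    40%    /local/sample-rg/data1'

# Skip the header line, print the last column (the mount point)
printf '%s\n' "$df_output" | awk 'NR>1 {print $NF}'
# -> /
# -> /local/sample-rg/data1
```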
==Links==
* [https://blogs.oracle.com/BestPerf/entry/i_o_analysis_using_dtrace I/O analysis using DTrace]
* [http://www.brendangregg.com/DTrace/dtrace_oneliners.txt Brendan Gregg's DTrace one-liners]
55f9647e301d6b7aa8b26fa1b39a3e6c0bd31a79
VirtualBox physical mapping
0
355
2281
1857
2021-11-25T15:51:24Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Virtualbox]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems with installing Windows updates and grub. Some Windows updates fail if you have grub in your MBR.
===Create a dummy mbr===
<syntaxhighlight lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
===Create the mapping as a VMDK file===
<syntaxhighlight lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
After that create a VM and use this special VMDK file.
16b9932bdb93c7b6a3944b6d2600e80b746ea19b
VMWare CLi
0
344
2282
1772
2021-11-25T15:51:27Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: VMWare]]
=== Routes ===
<pre>
esxcli network ip route ipv4 add --network=10.14.90.0/25 --gateway=10.128.1.9
esxcli network ip route ipv4 add --network=10.14.95.0/25 --gateway=10.128.1.9
esxcli network ip route ipv4 add --network=10.14.90.128/25 --gateway=10.128.1.10
esxcli network ip route ipv4 add --network=10.14.95.128/25 --gateway=10.128.1.10
</pre>
=== Firewall ===
==== SSH ====
<pre>
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id sshServer
Ruleset Allowed IP Addresses
--------- ------------------------------
sshServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== HTTP ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMHttpServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMHttpServer
Ruleset Allowed IP Addresses
------------- ----------------------------
CIMHttpServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== HTTPS ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMHttpsServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpsServer --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMHttpsServer --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMHttpsServer
Ruleset Allowed IP Addresses
-------------- ----------------------------
CIMHttpsServer 10.14.0.0/16, 192.168.2.0/24
</pre>
==== CIMSLP ====
<pre>
esxcli network firewall ruleset set --ruleset-id CIMSLP --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id CIMSLP --ip-address 10.14.0.0/16
esxcli network firewall ruleset allowedip add --ruleset-id CIMSLP --ip-address 192.168.2.0/24
esxcli network firewall ruleset allowedip list --ruleset-id CIMSLP
Ruleset Allowed IP Addresses
------- ----------------------------
CIMSLP 10.14.0.0/16, 192.168.2.0/24
</pre>
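The four firewall sections above follow the same pattern, so the commands can be generated with a loop. The sketch below only prints the esxcli invocations instead of executing them, so it can be reviewed first and then piped into the ESXi shell; the ruleset IDs and networks are the ones used above:

```shell
# Print (not execute) the esxcli firewall commands for each ruleset
for ruleset in sshServer CIMHttpServer CIMHttpsServer CIMSLP; do
  echo "esxcli network firewall ruleset set --ruleset-id $ruleset --allowed-all false"
  for net in 10.14.0.0/16 192.168.2.0/24; do
    echo "esxcli network firewall ruleset allowedip add --ruleset-id $ruleset --ip-address $net"
  done
done
```

Once the output looks right, append `| sh` on the host to apply it.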
55de699155de811a0dfe91882a2bef6789d1ea76
Solaris cluster clone
0
185
2283
2198
2021-11-25T15:51:31Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Cluster Clone]][[Category:SunCluster|Clone]]
If you need to recreate a cluster node from a surviving node, perform the following steps:
==Clone system disk==
For example via metattach to the metaroot.
==Edit normal Solaris parameters==
/etc/nodename
/etc/hostname.*
Check: /etc/inet/hosts
If mirrored by SVM, do the following:
# Edit /etc/vfstab of the clone to use the normal devices
# Edit /etc/system:
<syntaxhighlight lang=bash>
* Begin MDD root info (do not edit)
** rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
</syntaxhighlight>
Unmount the cloned disk and fsck its root slice.
==Edit Cluster parameter==
Get the right id from:
<syntaxhighlight lang=bash>
# nawk '/cluster\.nodes\.[^.]*\.name/{split($1,field,"."); print field[3],$NF}' /etc/cluster/ccr/global/infrastructure
1 node-a
2 node-b
</syntaxhighlight>
Then set the node ID of the clone:
 echo <nodeid> > /etc/cluster/nodeid
For example, for node-b:
 echo 2 > /etc/cluster/nodeid
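The nawk extraction above can be tried against a fabricated infrastructure fragment (hypothetical contents; the real file lives at /etc/cluster/ccr/global/infrastructure; plain `awk` is used here in place of Solaris `nawk`):

```shell
# Two hypothetical node entries in CCR infrastructure format
printf '%s\n' \
  'cluster.nodes.1.name node-a' \
  'cluster.nodes.2.name node-b' |
awk '/cluster\.nodes\.[^.]*\.name/{split($1,field,"."); print field[3], $NF}'
# -> 1 node-a
# -> 2 node-b
```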
63002445256ad9fb586c265720d4d8ddcf9a218e
Docker tips and tricks
0
372
2284
2020
2021-11-25T15:51:31Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
== Using docker behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit docker.service
</syntaxhighlight>
Enter the next three lines and save:
<syntaxhighlight lang=ini>
[Service]
Environment="HTTP_PROXY=user:pass@proxy:port"
Environment="HTTPS_PROXY=user:pass@proxy:port"
</syntaxhighlight>
Restart docker:
<syntaxhighlight lang=bash>
# systemctl restart docker.service
</syntaxhighlight>
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<syntaxhighlight lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</syntaxhighlight>
7a4a97f84cfdd299bc929e171ac0be9104e1d3fb
SunCluster Delete Ressource Group
0
206
2285
2254
2021-11-25T15:51:41Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:SunCluster]]
=Completely removing a resource group=
Derivation of the data that will be used in the one-liners below.
Do not do this! Once again, I accept no responsibility! It is all wrong! Do not do it!
==Set the resource group in question==
<syntaxhighlight lang=bash>
# RG=my-rg
</syntaxhighlight>
==Show the resources==
<syntaxhighlight lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</syntaxhighlight>
==Take the resource group and resources offline==
<syntaxhighlight lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</syntaxhighlight>
==Show the ZPools==
<syntaxhighlight lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</syntaxhighlight>
==Show only the ZPool names==
<syntaxhighlight lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</syntaxhighlight>
==Show the DID devices==
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</syntaxhighlight>
or only the DIDs:
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</syntaxhighlight>
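The nawk stage in the pipelines above strips the slice suffix from each device line of `zpool status`. A minimal sketch on a fabricated status fragment (hypothetical device names; plain `awk` is used here in place of Solaris `nawk`):

```shell
printf '%s\n' \
  '        NAME                                       STATE' \
  '        my_pool                                    ONLINE' \
  '          c0t600A0B80006E103C00000B9B50B2F83Ed0s0  ONLINE' \
  '          c0t600A0B80006E10020000D54150B2FF26d0s0  ONLINE' |
awk '/c[0-9]+t/{gsub(/s.*$/,"",$1); print $1}'
# -> c0t600A0B80006E103C00000B9B50B2F83Ed0
# -> c0t600A0B80006E10020000D54150B2FF26d0
```

Only the device lines match `/c[0-9]+t/`; `gsub(/s.*$/,"",$1)` then removes the `s0` slice so the names can be fed to `cldev`/`scdidadm`.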
==Disable device monitoring==
This is important in order to get the devices completely out of the cluster later!
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</syntaxhighlight>
==Delete the resource group==
<syntaxhighlight lang=bash>
# RG=my-rg
# clrs disable -g ${RG} +
# clrs delete -g ${RG} +
# clrg delete ${RG}
</syntaxhighlight>
==Now unmap the LUNs on the storage==
And delete them if necessary...
==Remove LUNs that no longer exist from Solaris==
<syntaxhighlight lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</syntaxhighlight>
==Clean up the DIDs==
<syntaxhighlight lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</syntaxhighlight>
==Clean up zone configs if needed==
<syntaxhighlight lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</syntaxhighlight>
76b532214124d8fec5bb1e0a471f2fc0fb03d2e6
ZFS Networker
0
158
2286
2259
2021-11-25T15:51:48Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS|Backup]]
[[Category:Backup|Networker]]
[[Category:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<syntaxhighlight lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</syntaxhighlight>
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<syntaxhighlight lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</syntaxhighlight>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<syntaxhighlight lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/logs"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For all but get_slaves redirect output to log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin password
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | nawk '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting during cluster monitoring, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
sleep 1
${CLRS_CMD} monitor ${RES}
fi
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting during cluster monitoring, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "Resource group of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong count of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong count of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump_cluster <Ressource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the dir to write down zfs-setup
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
[ "_${ZONE}_" != "__" ] && ${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</syntaxhighlight>
MD5 checksum
<syntaxhighlight lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
01be6677ddf4342b625b1aa59d805628
</syntaxhighlight>
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<syntaxhighlight lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</syntaxhighlight>
===Look for a valid backup===
<source lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</syntaxhighlight>
===Restore ZFS configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</syntaxhighlight>
Look into file /tmp/ZFS_Setup.sh which should look like this:
<source lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</syntaxhighlight>
Mount the needed ZFS filesystems.
===Restore zone configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</syntaxhighlight>
===Restore cluster configuration===
<source lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</syntaxhighlight>
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<source lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</syntaxhighlight>
==Registering new resource type LGTO.clnt==
1. Install Solaris client package LGTOclnt
2. Register new resource type in cluster. One one node do:
<source lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</syntaxhighlight>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
So I use scripts like this:
<source lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</syntaxhighlight>
This expands to:
<source lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</syntaxhighlight>
Now we have a client name to which we can connect to: sample-lh
702ad9e44c7e81f981d345e2b8451de6d50739cf
2302
2286
2021-11-25T15:53:00Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:ZFS|Backup]]
[[Category:Backup|Networker]]
[[Category:Solaris|Backup]]
=Backup of ZFS snapshots on Solaris Cluster with Legato/EMC Networker=
This describes how to set up a backup of the Solaris Cluster resource group named sample-rg.
The structure of my RGs is always:
<pre>
RG: <name>-rg
ZFS-HASP: <name>-hasp-zfs-res
Logical Host: <name>-lh-res
Logical Host Name: <name>-lh
ZPOOL: <name>_pool
</pre>
I used bash as the shell.
==Define variables used in the following command lines==
<syntaxhighlight lang=bash>
# NAME=sample
# RGname=${NAME}-rg
# NetworkerGroup=$(echo ${NAME} | tr 'a-z' 'A-Z' )
# ZPOOL=${NAME}_pool
# ZPOOL_BASEDIR=/local/${RGname}
</syntaxhighlight>
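As a quick sanity check, you can echo the derived values before using them in any of the later commands (assuming the variables from the block above):
<syntaxhighlight lang=bash>
# echo "${RGname} ${NetworkerGroup} ${ZPOOL} ${ZPOOL_BASEDIR}"
sample-rg SAMPLE sample_pool /local/sample-rg
</syntaxhighlight>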
==Define a resource for Networker==
What we need now is a resource definition in our Networker directory like this:
<syntaxhighlight lang=bash>
# mkdir /nsr/{bin,log,res}
# cat > /nsr/res/${NetworkerGroup}.res <<EOF
type: savepnpc;
precmd: "/nsr/bin/nsr_snapshot.sh pre >/nsr/log/networker_precmd.log 2>&1";
pstcmd: "/nsr/bin/nsr_snapshot.sh pst >/nsr/log/networker_pstcmd.log 2>&1";
timeout: "08:00am";
abort precmd with group: Yes;
EOF
</syntaxhighlight>
==The pre-/pstcmd-script==
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
<syntaxhighlight lang=bash>
#!/bin/bash
cmd_option=$1
export cmd_option
SNAPSHOT_NAME="nsr"
BASE_LOG_DIR="/nsr/log"
NSR_BACKUP_CLONE="nsr_backup"
# Commands
ZFS_CMD="/usr/sbin/zfs"
ZPOOL_CMD="/usr/sbin/zpool"
ZLOGIN_CMD="/usr/sbin/zlogin"
ZONECFG_CMD="/usr/sbin/zonecfg"
SVCS_CMD="/usr/sbin/svcs"
SVCADM_CMD="/usr/sbin/svcadm"
DF_CMD="/usr/bin/df"
RM_CMD="/usr/bin/rm"
AWK_CMD="/usr/bin/nawk"
MKNOD_CMD="/usr/sbin/mknod"
XARGS_CMD="/usr/bin/xargs"
PARGS_CMD="/usr/bin/pargs"
PTREE_CMD="/usr/bin/ptree"
CLRS_CMD="/usr/cluster/bin/clrs"
CLRG_CMD="/usr/cluster/bin/clrg"
CLRT_CMD="/usr/cluster/bin/clrt"
BASENAME_CMD="/usr/bin/basename"
GETENT_CMD="/usr/bin/getent"
SCHA_RESOURCE_GET_CMD="/usr/cluster/bin/scha_resource_get"
WGET_CMD=/usr/sfw/bin/wget
HOSTNAME_CMD="/usr/bin/uname -n"
# Subdir in ZFS where to put ZFS-config
ZFS_SETUP_SUBDIR="cluster_config"
ZFS_CONFIG_FILE=ZFS_Setup.sh
# Oracle parameter
ORACLE_SID=SAMPLE
ORACLE_USER=oracle
# Sophora parameter
SOPHORA_FMRI="svc:/cms/sophora:default"
SOPHORA_USER=admin
SOPHORA_PASS=password
GLOBAL_LOGFILE=${BASE_LOG_DIR}/$(${BASENAME_CMD} $0 .sh).log
# For all but get_slaves redirect output to log
case ${cmd_option} in
get_slaves)
;;
*)
exec >>${GLOBAL_LOGFILE} 2>&1
;;
esac
function print_option () {
option=$1; shift
# now process line
while [ $# -gt 0 ]
do
case $1 in
${option})
echo $2
shift
shift
;;
*)
shift
;;
esac
done
}
function sophora_startup () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Starting sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} enable ${SOPHORA_FMRI}
}
function sophora_shutdown () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_FMRI=$2 # FMRI for svcadm
print_log ${LOGFILE} "Shutting down sophora in ${SOPHORA_ZONE}..."
${ZLOGIN_CMD} ${SOPHORA_ZONE} ${SVCADM_CMD} disable -t ${SOPHORA_FMRI}
}
function sophora_get_slaves () {
SOPHORA_ZONE=$1 # Zone for zlogin
SOPHORA_PORT=$2 # Sophora port at localhost
SOPHORA_USER=$3 # Sophora admin user
SOPHORA_PASS=$4 # Sophora admin password
${ZLOGIN_CMD} ${SOPHORA_ZONE} \
${WGET_CMD} \
-qO- \
--no-proxy \
--http-user=${SOPHORA_USER} \
--http-password=${SOPHORA_PASS} \
"http://localhost:${SOPHORA_PORT}/content-api/servers/?replicationMode=SLAVE" | \
${AWK_CMD} '
function get_param(param,name){
name="\""name"\"";
count=split(param,tupel,/,/);
for(i=1;i<=count;i++){
split(tupel[i],part,/:/);
if(part[1]==name){
gsub(/\"/,"",part[2]);return part[2];
}
}
}
{
json=$0;
gsub(/(\[\{|\}\])/,"",json);
elements=split(json,array,/\},\{/);
for(element=1;element<=elements;element++){
print get_param(array[element],"hostname");
}
}' | ${XARGS_CMD} -n 1 -i ${BASENAME_CMD} {} .server.de
}
function get_zone_hostname () {
${ZLOGIN_CMD} $1 ${HOSTNAME_CMD}
}
function print_log () {
LOGFILE=$1 ; shift
if [ $# -gt 0 ]
then
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "$*" >> ${LOGFILE}
else
#printf "%s (%s): " "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" >> ${LOGFILE}
while read data
do
printf "%s (%s): %s\n" "$(date '+%Y%m%d %H:%M:%S')" "${cmd_option}" "${data}" >> ${LOGFILE}
done
fi
}
function dump_zfs_config {
ZPOOL=$1
OUTPUT_FILE=$2
printf "\n\n# Create ZPool ${ZPOOL} with size $(${ZPOOL_CMD} list -Ho size ${ZPOOL}):\n\n" >> ${OUTPUT_FILE}
${ZPOOL_CMD} status ${ZPOOL} | ${AWK_CMD} '/config:/,/errors:/{if(/NAME/){getline; printf "Zpool structure of %s:\n\nzpool create %s",$1,$1; getline ; device=0; while(!/^$/ && !/errors:/){gsub(/mirror-[0-9]+/,"mirror",$1);gsub(/logs/,"log",$1);gsub(/(\/dev\/(r)*dsk\/)*c[0-9]+t[0-9A-F]+d[0-9]+(s[0-9]+)*/,"<device"device">",$1);if(/device/)device++;printf " %s",$1 ; getline}};printf "\n" ;}' >> ${OUTPUT_FILE}
printf "\n\n# Create ZFS\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} list -Hrt filesystem -o name,origin ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} 'NR>1 && $2=="-"{print zfs_cmd,"create -o mountpoint=none",$1}' >> ${OUTPUT_FILE}
printf "\n\n# Set ZFS values\n\n" >> ${OUTPUT_FILE}
${ZFS_CMD} get -s local -Ho name,property,value -pr all ${ZPOOL} | ${AWK_CMD} -v zfs_cmd=${ZFS_CMD} '$2!="readonly"{printf "%s set -p %s=%s %s\n",zfs_cmd,$2,$3,$1}' >> ${OUTPUT_FILE}
}
function dump_cluster_config {
RG=$1
OUTPUT_DIR=$2
${RM_CMD} -f ${OUTPUT_DIR}/${RG}.clrg_export.xml
${CLRG_CMD} export -o ${OUTPUT_DIR}/${RG}.clrg_export.xml ${RG}
for RES in $(${CLRS_CMD} list -g ${RG})
do
${RM_CMD} -f ${OUTPUT_DIR}/${RES}.clrs_export.xml
${CLRS_CMD} export -o ${OUTPUT_DIR}/${RES}.clrs_export.xml ${RES}
done
# Commands to recreate the RG
COMMAND_FILE="${OUTPUT_DIR}/${RG}.ClusterCreateCommands.txt"
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RG}" "${CLRG_CMD}" "${OUTPUT_DIR}/${RG}.clrg_export.xml" "${RG}" > ${COMMAND_FILE}
for RT in SUNW.LogicalHostname SUNW.HAStoragePlus SUNW.gds LGTO.clnt
do
for RT_VERSION in $(${CLRT_CMD} list | ${AWK_CMD} -v rt=${RT} '$1 ~ rt')
do
for RES in $(${CLRS_CMD} list -g ${RG} -t ${RT_VERSION})
do
if [ "_${RT}_" == "_SUNW.LogicalHostname_" ]
then
printf "Add the following entries to all nodes!!!:\n/etc/inet/hosts:\n" >> ${COMMAND_FILE}
${GETENT_CMD} hosts $(${CLRS_CMD} show -p HostnameList ${RES} | ${AWK_CMD} '$1=="HostnameList:"{$1="";print}') >> ${COMMAND_FILE}
printf "\n" >> ${COMMAND_FILE}
fi
printf "Recreate %s:\n%s create -i %s %s\n\n" "${RES}" "${CLRS_CMD}" "${OUTPUT_DIR}/${RES}.clrs_export.xml" "${RES}" >> ${COMMAND_FILE}
done
done
done
}
function snapshot_pre {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pre-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
create pfile from spfile;
alter system archive log current;
alter database backup controlfile to trace;
alter database begin backup;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_pst {
DB=$1
DBUSER=$2
if [ $# -eq 3 -a "_$3_" != "__" ]
then
ZONE=$3
ZONE_CMD="${ZLOGIN_CMD} -l ${DBUSER} ${ZONE}"
ZONE_BASE=$(/usr/sbin/zonecfg -z ${ZONE} info zonepath | ${AWK_CMD} '{print $NF;}')
ZONE_ROOT="${ZONE_BASE}/root"
else
ZONE_ROOT=""
ZONE_CMD="su - ${DBUSER} -c"
fi
if( ${ZONE_CMD} echo >/dev/null 2>&1 )
then
SCRIPT_NAME="tmp/.nsr-pst-snap-script.$$"
# Create script inside zone
cat >${ZONE_ROOT}/${SCRIPT_NAME} <<EOS
#!/bin/bash
DBDIR=\$(${AWK_CMD} -F':' -v ORACLE_SID=${ORACLE_SID} '\$1==ORACLE_SID {print \$2;}' /var/opt/oracle/oratab)
\${DBDIR}/bin/sqlplus sys/${DBUSER} as sysdba << EOF
alter database end backup;
alter system archive log current;
EOF
EOS
chmod 755 ${ZONE_ROOT}/${SCRIPT_NAME}
${ZONE_CMD} /${SCRIPT_NAME} 2>&1 | print_log ${LOGFILE}
rm -f ${ZONE_ROOT}/${SCRIPT_NAME}
fi
}
function snapshot_create {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting during cluster monitoring, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
print_log ${LOGFILE} "Create ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
${ZFS_CMD} clone -o readonly=on ${zfs_snapshot} ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
${ZFS_CMD} mount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} 2>/dev/null
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
# echo /usr/sbin/save -s ${SERVER_NAME} -g ${GROUP_NAME} -LL -m ${CLIENT_NAME} $(${ZFS_CMD} get -Ho value mountpoint ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})
${ZFS_CMD} list -Ho creation,name ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE} | print_log ${LOGFILE}
fi
done
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
sleep 1
${CLRS_CMD} monitor ${RES}
fi
}
function snapshot_destroy {
ZPOOL=$1
SNAPSHOT_NAME=$2
RES="$(${CLRS_CMD} show -p ZPools | ${AWK_CMD} -v pool=${ZPOOL} '/^Resource:/{res=$NF;}$NF ~ pool{print res;}')"
# Because of problems with unmounting during cluster monitoring, disable monitoring for this step
print_log ${LOGFILE} "Telling Cluster not to monitor ${RES}"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} unmonitor ${RES}
fi
if (${ZFS_CMD} list -t snapshot ${ZPOOL}@${SNAPSHOT_NAME} > /dev/null)
then
for zfs_snapshot in $(${ZFS_CMD} list -Ho name -t snapshot -r ${ZPOOL} | grep ${SNAPSHOT_NAME})
do
if [ "_$(${ZFS_CMD} get -Ho value mounted ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_yes_" ]
then
print_log ${LOGFILE} "Unmount ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} unmount ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
# If this is a clone of ${zfs_snapshot}, then destroy it
if [ "_$(${ZFS_CMD} list -Ho origin ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE})_" == "_${zfs_snapshot}_" ]
then
print_log ${LOGFILE} "Destroy ZFS clone ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}"
${ZFS_CMD} destroy ${zfs_snapshot/@*/}/${NSR_BACKUP_CLONE}
fi
done
print_log ${LOGFILE} "Destroy ZFS snapshot -r ${ZPOOL}@${SNAPSHOT_NAME}"
${ZFS_CMD} destroy -r ${ZPOOL}@${SNAPSHOT_NAME}
fi
print_log ${LOGFILE} "Telling Cluster to monitor ${RES} again"
if [ "_${RES}_" != "__" ]
then
${CLRS_CMD} monitor ${RES}
fi
}
function usage {
echo "Usage: $0 (pre|pst)"
echo "Usage: $0 init <ZPool-Name>"
echo "Usage: $0 initall"
echo "Usage: $0 dump <ZPool-Name> <Output-File>"
exit 1
}
case ${cmd_option} in
pre|pst)
case ${cmd_option} in
pre)
# Get commandline from parent pid
# pre /usr/sbin/savepnpc -c <NetworkerClient> -s <NetworkerServer> -g <NetworkerGroup> -LL
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/savepnpc/{print $1}')
;;
pst)
# Get commandline from parent pid
# pst /usr/bin/pstclntsave -s <NetworkerServer> -g <NetworkerGroup> -c <NetworkerClient>
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) Called from $(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $0}')"
pid=$(${PTREE_CMD} $$ | ${AWK_CMD} '/pstclntsave/{print $1}')
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
print_log ${GLOBAL_LOGFILE} "(${cmd_option}) PID=${pid}"
;;
esac
commandline="$(${PARGS_CMD} -c ${pid} | ${AWK_CMD} -F':' '$1 ~ /^argv/{printf $2}END{print;}')"
# Called from backupserver use -c
CLIENT_NAME=$(print_option -c ${commandline})
# If called from cmdline use -m
CLIENT_NAME=${CLIENT_NAME:-$(print_option -m ${commandline})}
# Last resort pre/post
CLIENT_NAME=${CLIENT_NAME:-${cmd_option}}
SERVER_NAME=$(print_option -s ${commandline})
GROUP_NAME=$(print_option -g ${commandline})
LOGFILE=${BASE_LOG_DIR}/${CLIENT_NAME}.log
print_log ${LOGFILE} "Called from ${commandline}"
named_pipe=/tmp/.named_pipe.$$
# Delete named pipe on exit
trap "rm -f ${named_pipe}" EXIT
# Create named pipe
${MKNOD_CMD} ${named_pipe} p
# Read from named pipe and send it to print_log
tee <${named_pipe} | print_log ${LOGFILE}&
# Close STDOUT & STDERR
exec 1>&-
exec 2>&-
# Redirect them to named pipe
exec >${named_pipe} 2>&1
print_log ${LOGFILE} "Begin backup of ${CLIENT_NAME}"
# Get resource name from hostname
LH_RES=$(${CLRS_CMD} show -t SUNW.LogicalHostname -p HostnameList | ${AWK_CMD} -v Hostname="${CLIENT_NAME}" '/^Resource:/{res=$NF} /HostnameList:/ {for(i=2;i<=NF;i++){if($i == Hostname){print res}}}')
print_log ${LOGFILE} "LogicalHostname of ${CLIENT_NAME} is ${LH_RES}"
# Get resource group name from resource name
RG=$(${SCHA_RESOURCE_GET_CMD} -O GROUP -R ${LH_RES})
print_log ${LOGFILE} "ResourceGroup of ${LH_RES} is ${RG}"
ZPOOLS=$(${CLRS_CMD} show -g ${RG} -p Zpools | ${AWK_CMD} '$1=="Zpools:"{$1="";print $0}')
print_log ${LOGFILE} "ZPools used in ${RG}: ${ZPOOLS}"
Start_command=$(${CLRS_CMD} show -p Start_command -g ${RG} | ${AWK_CMD} -F ':' '$1 ~ /Start_command/ && $2 ~ /sczbt/')
print_log ${LOGFILE} "sczbt Start_command is: ${Start_command}"
sczbt_config=$(print_option -P ${Start_command})/sczbt_$(print_option -R ${Start_command})
print_log ${LOGFILE} "sczbt_config is ${sczbt_config}"
ZONE=$(${AWK_CMD} -F '=' '$1=="Zonename"{gsub(/"/,"",$2);print $2}' ${sczbt_config})
print_log ${LOGFILE} "Zone from ${sczbt_config} is ${ZONE}"
;;
init)
LOGFILE=${BASE_LOG_DIR}/init.log
if [ $# -ne 2 ]
then
echo "Wrong count of parameters."
echo "Use $0 init <ZPool-Name>"
exit 1
fi
ZPOOL=$2
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option}) of zpool ${ZPOOL}"
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
;;
initall)
LOGFILE=${BASE_LOG_DIR}/initall.log
print_log ${GLOBAL_LOGFILE} "Begin (${cmd_option})"
;;
get_slaves)
if [ $# -ne 5 ]
then
echo "Wrong count of parameters."
echo "Use $0 get_slaves <Zone-Name> <Sophora-Port> <Sophora-Adminuser> <Sophora-Password>"
exit 1
fi
echo "Slave node(s): $(sophora_get_slaves $2 $3 $4 $5)"
exit 0
;;
esac
case ${cmd_option} in
dump_cluster)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump_cluster <Resource_Group> <DIR>"
exit 1
fi
dump_cluster_config $2 $3
;;
dump)
if [ $# -ne 3 ]
then
echo "Wrong count of parameters."
echo "Use $0 dump <ZPool-Name> <File>"
exit 1
fi
dump_zfs_config $2 $3
;;
init)
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
;;
initall)
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
if [ "_${ZPOOL}_" == "_rpool_" ]
then
continue
fi
print_log ${LOGFILE} "Begin init of zpool ${ZPOOL}"
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
print_log ${LOGFILE} "End init of zpool ${ZPOOL}"
done
;;
pre)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
# Shutdown Sophora?
startup="No"
case ${ZONE} in
arcus-rg)
# Staging zones
#sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
#startup="Yes"
;;
incus-zone|velum-zone)
SOPHORA_ADMINPORT=1196
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
merkel-zone|brandt-zone|schmidt-zone)
SOPHORA_ADMINPORT=1396
# Master-/slave-zones
is_slave=0
zone_hostname=$(get_zone_hostname ${ZONE})
for slave in $(sophora_get_slaves ${ZONE} ${SOPHORA_ADMINPORT} ${SOPHORA_USER} ${SOPHORA_PASS})
do
print_log ${LOGFILE} "_${slave}_ == _${zone_hostname}_?"
if [ "_${slave}_" == "_${zone_hostname}_" ]
then
is_slave=1
fi
done
if [ ${is_slave} -eq 1 ]
then
# Slave
print_log ${LOGFILE} "Slave..."
sophora_shutdown ${ZONE} ${SOPHORA_FMRI}
startup="Yes"
else
# Master
print_log ${LOGFILE} "Master... Not shutting down Sophora"
fi
;;
*)
;;
esac
# Find the directory to write the ZFS setup to
for ZPOOL in ${ZPOOLS}
do
if [ "_$(${ZFS_CMD} list -Ho name ${ZPOOL}/${ZFS_SETUP_SUBDIR} 2>/dev/null)_" != "__" ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL}/${ZFS_SETUP_SUBDIR})
else
if [ -d $(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR} ]
then
CONFIG_DIR=$(${ZFS_CMD} get -Ho value mountpoint ${ZPOOL})/${ZFS_SETUP_SUBDIR}
fi
fi
if [ -d ${CONFIG_DIR} ]
then
printf "# Settings for ZFS\n\n" > ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
ZONE_CONFIG_FILE=zonecfg_${ZONE}.export
[ "_${ZONE}_" != "__" ] && ${ZONECFG_CMD} -z ${ZONE} export > ${CONFIG_DIR}/${ZONE_CONFIG_FILE}
fi
done
# Save configs and create snapshots
for ZPOOL in ${ZPOOLS}
do
if [ "_${CONFIG_DIR}_" != "__" ]
then
# Save zfs config
dump_zfs_config ${ZPOOL} ${CONFIG_DIR}/${ZFS_CONFIG_FILE}
# Save Clusterconfig
dump_cluster_config ${RG} ${CONFIG_DIR}
fi
snapshot_create ${ZPOOL} ${SNAPSHOT_NAME}
done
# Startup Sophora?
if [ "_${startup}_" == "_Yes_" ]
then
sophora_startup ${ZONE} ${SOPHORA_FMRI}
fi
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
pst)
for ZPOOL in ${ZPOOLS}
do
snapshot_destroy ${ZPOOL} ${SNAPSHOT_NAME}
done
print_log ${LOGFILE} "End backup of ${CLIENT_NAME}"
;;
*)
usage
;;
esac
print_log ${GLOBAL_LOGFILE} "End (${cmd_option}) Called from:"
${PTREE_CMD} $$ | print_log ${GLOBAL_LOGFILE}
exit 0
</syntaxhighlight>
MD5-Checksum
<syntaxhighlight lang=bash>
# digest -a md5 /nsr/bin/nsr_snapshot.sh
01be6677ddf4342b625b1aa59d805628
</syntaxhighlight>
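digest(1) is Solaris-specific; on systems without it (e.g. a Linux admin host holding a copy of the script), the same MD5 hash can be computed with md5sum from GNU coreutils:
<syntaxhighlight lang=bash>
# md5sum /nsr/bin/nsr_snapshot.sh | awk '{print $1}'
</syntaxhighlight>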
!!!THIS CODE IS UNTESTED, DO NOT USE IT!!!
!!!THIS IS JUST AN EXAMPLE!!!
==Restore/Recover==
===Set some variables===
<syntaxhighlight lang=bash>
NSR_CLIENT="sample-cl"
NSR_SERVER="nsr-server"
ZPOOL="sample_pool"
RG="${NSR_CLIENT%-cl}-rg"
ZONE="${NSR_CLIENT%-cl}-zone"
</syntaxhighlight>
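The `${NSR_CLIENT%-cl}` expansions strip the `-cl` suffix from the client name before appending `-rg` and `-zone`, so with the values above RG and ZONE come out as:
<syntaxhighlight lang=bash>
# echo "${RG} ${ZONE}"
sample-rg sample-zone
</syntaxhighlight>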
===Look for a valid backup===
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -s ${NSR_SERVER} -o t -N /local/${RG}/cluster_config/nsr_backup
</syntaxhighlight>
===Restore ZFS configuration===
<syntaxhighlight lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/ZFS_Setup.sh
</syntaxhighlight>
Look into the file /tmp/ZFS_Setup.sh, which should look like this:
<syntaxhighlight lang=bash>
# Create ZPool sample_pool with size 1.02T:
Zpool structure of sample_pool:
zpool create sample_pool mirror <device0> <device1>
# Create ZFS
/usr/sbin/zfs create -o mountpoint=none sample_pool/app
/usr/sbin/zfs create -o mountpoint=none sample_pool/cluster_config
/usr/sbin/zfs create -o mountpoint=none sample_pool/data1
/usr/sbin/zfs create -o mountpoint=none sample_pool/data2
/usr/sbin/zfs create -o mountpoint=none sample_pool/home
/usr/sbin/zfs create -o mountpoint=none sample_pool/log
/usr/sbin/zfs create -o mountpoint=none sample_pool/usr_local
/usr/sbin/zfs create -o mountpoint=none sample_pool/zone
# Set ZFS values
/usr/sbin/zfs set -p reservation=104857600 sample_pool
/usr/sbin/zfs set -p mountpoint=none sample_pool
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/app sample_pool/app
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/cluster_config sample_pool/cluster_config
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data1 sample_pool/data1
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/data2 sample_pool/data2
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/home sample_pool/home
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/log sample_pool/log
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/usr_local sample_pool/usr_local
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone
/usr/sbin/zfs set -p mountpoint=/local/sample-rg/zone-zfsBE_20121105 sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zoned=off sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p canmount=on sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:zn=sample-zone sample_pool/zone-zfsBE_20121105
/usr/sbin/zfs set -p zpdata:rbe=S10_U9 sample_pool/zone-zfsBE_20121105
</syntaxhighlight>
Mount the needed ZFS filesystems.
===Restore zone configuration===
<syntaxhighlight lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} -f /tmp/zonecfg_${ZONE}.export
# zonecfg -z ${ZONE} info
</syntaxhighlight>
===Restore cluster configuration===
<syntaxhighlight lang=bash>
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*_export.xml
# /usr/sbin/recover -s ${NSR_SERVER} -c ${NSR_CLIENT} -d /tmp -a /local/${RG}/cluster_config/nsr_backup/*.ClusterCreateCommands.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/${RG}.ClusterCreateCommands.txt
</syntaxhighlight>
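The perl one-liner rewrites the backed-up paths in the command file so they point at the recovered copies under /tmp. A minimal demonstration on a scratch file (the file name /tmp/demo.txt is just an example):
<syntaxhighlight lang=bash>
# echo '/usr/cluster/bin/clrg create -i /local/sample-rg/cluster_config/nsr_backup/sample-rg.clrg_export.xml sample-rg' > /tmp/demo.txt
# /usr/bin/perl -pi -e "s#/local/${RG}/cluster_config/nsr_backup/#/tmp/#g" /tmp/demo.txt
# cat /tmp/demo.txt
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
</syntaxhighlight>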
Follow the instructions in /tmp/${RG}.ClusterCreateCommands.txt:
<syntaxhighlight lang=bash>
Recreate sample-rg:
/usr/cluster/bin/clrg create -i /tmp/sample-rg.clrg_export.xml sample-rg
Add the following entries to all nodes!!!:
/etc/inet/hosts:
10.29.7.96 sample-cl
Recreate sample-lh-res:
/usr/cluster/bin/clrs create -i /tmp/sample-lh-res.clrs_export.xml sample-lh-res
Recreate sample-hasp-zfs-res:
/usr/cluster/bin/clrs create -i /tmp/sample-hasp-zfs-res.clrs_export.xml sample-hasp-zfs-res
Recreate sample-emctl-res:
/usr/cluster/bin/clrs create -i /tmp/sample-emctl-res.clrs_export.xml sample-emctl-res
Recreate sample-oracle-res:
/usr/cluster/bin/clrs create -i /tmp/sample-oracle-res.clrs_export.xml sample-oracle-res
Recreate sample-zone-res:
/usr/cluster/bin/clrs create -i /tmp/sample-zone-res.clrs_export.xml sample-zone-res
Recreate sample-nsr-res:
/usr/cluster/bin/clrs create -i /tmp/sample-nsr-res.clrs_export.xml sample-nsr-res
</syntaxhighlight>
==Registering new resource type LGTO.clnt==
1. Install the Solaris client package LGTOclnt
2. Register the new resource type in the cluster. On one node do:
<syntaxhighlight lang=bash>
# clrt register -f /usr/sbin/LGTO.clnt.rtr LGTO.clnt
</syntaxhighlight>
Now you have a new resource type LGTO.clnt in your cluster.
==Create client resource of type LGTO.clnt==
I use a script like this:
<syntaxhighlight lang=bash>
# RGname=sample-rg
# clrs create \
-t LGTO.clnt \
-g ${RGname} \
-p Resource_dependencies=$(basename ${RGname} -rg)-hasp-zfs-res \
-p clientname=$(basename ${RGname} -rg)-lh \
-p Network_resource=$(basename ${RGname} -rg)-lh-res \
-p owned_paths=${ZPOOL_BASEDIR} \
$(basename ${RGname} -rg)-nsr-res
</syntaxhighlight>
This expands to:
<syntaxhighlight lang=bash>
# clrs create \
-t LGTO.clnt \
-g sample-rg \
-p Resource_dependencies=sample-hasp-zfs-res \
-p clientname=sample-lh \
-p Network_resource=sample-lh-res \
-p owned_paths=/local/sample-rg \
sample-nsr-res
</syntaxhighlight>
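The expansion works because `basename` with a second argument strips that suffix, so `basename sample-rg -rg` yields the stem `sample`, from which the resource names are rebuilt:
<syntaxhighlight lang=bash>
# basename sample-rg -rg
sample
# echo "$(basename sample-rg -rg)-nsr-res"
sample-nsr-res
</syntaxhighlight>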
Now we have a client name we can connect to: sample-lh
a8eb5d9b8dafc3883f453f5d5b3bc74976938930
Trochus sp.
0
123
2287
340
2021-11-25T15:51:50Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Kegelschnecke
| WissName = Trochus sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
43ef493eecb21c0c84111bbe3deda24944365af2
Solaris grub
0
199
2288
2150
2021-11-25T15:52:14Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= Set SP-console on x86-systems to 115200 Baud =
You need to set the new speed in all three places:
# grub
# SP host serial
# BIOS serial
== Solaris 11 ==
=== Set speed and port in grub ===
<syntaxhighlight lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</syntaxhighlight>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<syntaxhighlight lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</syntaxhighlight>
=== Set speed ===
/boot/solaris/bootenv.rc
<syntaxhighlight lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</syntaxhighlight>
Active after reboot.
=== Set console login speed ===
/etc/ttydefs
<syntaxhighlight lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</syntaxhighlight>
<syntaxhighlight lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</syntaxhighlight>
== Set speed in BIOS ==
Enter BIOS setup with <i>F2</i> or <i>CTRL+E</i>, then go to
<pre>
Advanced -> Serial Port Console Redirection -> Bits per second : 115200
</pre>
== Set speed for SP host serial ==
<syntaxhighlight lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</syntaxhighlight>
=grub rescue>=
The problem:
<syntaxhighlight lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</syntaxhighlight>
==Get into the normal grub==
Find your devices:
<syntaxhighlight lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</syntaxhighlight>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named Solaris11.3SRU15.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<syntaxhighlight lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</syntaxhighlight>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<syntaxhighlight lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</syntaxhighlight>
===Now you can load and start the module called "normal"===
<syntaxhighlight lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</syntaxhighlight>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<syntaxhighlight lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</syntaxhighlight>
a560b820efa44d81a287873bf10f84959c62c3e4
Solaris 11 Zones
0
257
2289
1256
2021-11-25T15:52:25Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris11|Zones]]
==zoneclone.sh==
<syntaxhighlight lang=bash>
#!/bin/bash

SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4

if [ $# -lt 3 ] ; then
    echo "Not enough arguments!"
    echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
    exit 1
fi

zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
    echo "Destination zone exists!"
    exit 1
}

zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
    echo "Source zone does not exist!"
    exit 1
}

SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
    echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
    exit 1
fi

if [ -n "${DST_DATASET}" ] ; then
    if [ -d ${DST_DIR} ] ; then
        rmdir ${DST_DIR} || {
            echo "${DST_DIR} must be empty!"
            exit 1
        }
    fi
    # Is parent dataset there?
    zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
        echo "Destination dataset does not exist!"
        exit 1
    }
    zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi

[ -d ${DST_DIR} ] || {
    echo "Destination dir must exist!"
    exit 1
}

zonecfg -z ${SRC_ZONE} export \
    | nawk -v zonepath=${DST_DIR} '
        BEGIN {
            FS="=";
            OFS="=";
        }
        /set zonepath/{$2=zonepath}
        { print; }
    ' \
    | zonecfg -z ${DST_ZONE} -f -

zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</syntaxhighlight>
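The core trick in zoneclone.sh is the nawk filter that rewrites the zonepath in the exported zone configuration. A standalone sketch with plain awk and a fabricated zonecfg export (both are stand-ins for the real Solaris commands):

```shell
# Fake "zonecfg -z zone01 export" output piped through the same rewrite
printf 'create -b\nset zonepath=/zones/zone01\nset autoboot=false\n' \
| awk -v zonepath=/zones/zone02 '
    BEGIN { FS = "="; OFS = "="; }
    /set zonepath/ { $2 = zonepath }  # replace the value behind the "=" sign
    { print; }
'
```

Only the `set zonepath=` line changes; every other line passes through untouched.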
==Way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (indexing man pages etc.) could not be performed in the immutable zone after a Solaris update, so one <i>boot -w</i> is necessary before the zone can come up in the cluster.
<syntaxhighlight lang=bash>
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</syntaxhighlight>
===Move all RGs from node first===
<syntaxhighlight lang=bash>
# clrg evacuate -n $(hostname) +
</syntaxhighlight>
===Update Solaris===
<syntaxhighlight lang=bash>
# pkg update --be-name $(pkg info -r system/kernel | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}') --accept -v
# init 6
</syntaxhighlight>
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online:
<syntaxhighlight lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</syntaxhighlight>
===Attach, boot -w, detach without cluster===
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zlogin zone01 svcs -xv # <- wait for all services to be ready
...
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</syntaxhighlight>
===Enable zone in cluster===
<syntaxhighlight lang=bash>
# clrs enable zone01-zone-rs
</syntaxhighlight>
==Some other things==
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</syntaxhighlight>
<syntaxhighlight lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the sofware in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</syntaxhighlight>
7ea4baccde862bcebc599911263d3439736cd517
Ubuntu networking
0
278
2290
2129
2021-11-25T15:52:28Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<syntaxhighlight lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</syntaxhighlight>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<syntaxhighlight lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</syntaxhighlight>
===Check settings===
<syntaxhighlight lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</syntaxhighlight>
==The ip command==
===Configure bond manually===
Specify your environment
<syntaxhighlight lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9/24
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</syntaxhighlight>
Create the bonding interface out of the two masters
<syntaxhighlight lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</syntaxhighlight>
If you want to add a VLAN to your interface
<syntaxhighlight lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</syntaxhighlight>
Bring your interface up and set your IP address
<syntaxhighlight lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</syntaxhighlight>
Set your default gateway and DNS
<syntaxhighlight lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</syntaxhighlight>
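The eval/IFS construct in the one-liner above only joins the nameserver array with commas. The join can be tried in isolation (using the example mynameservers from above):

```shell
# Same array as in the example environment
declare -a mynameservers=( 172.16.77.4 172.16.79.4 )

# Join the array elements with commas: in a subshell, set IFS to "," and
# expand with [*] so the elements are glued together with the first IFS char
joined=$(IFS=,; printf '%s' "${mynameservers[*]}")
echo "${joined}"    # -> 172.16.77.4,172.16.79.4
```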
===ipa===
This is not only India Pale Ale! On Linux
<syntaxhighlight lang=bash>
# ip a
</syntaxhighlight>
shows you the configured addresses.
It is short for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<syntaxhighlight lang=bash>
# ip li sh up
</syntaxhighlight>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
Former configuration in /etc/network/interfaces{,.d} is now found in /etc/netplan in YAML syntax.
The file is named /etc/netplan/<whatever you want, I prefer the interface name>.yaml. The .yaml extension is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<syntaxhighlight lang=bash>
# netplan apply
</syntaxhighlight>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<syntaxhighlight lang=yaml>
network:
  ethernets:
    ens160:
      dhcp4: yes
  version: 2
</syntaxhighlight>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<syntaxhighlight lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    slave1:
      match:
        macaddress: "3c:a7:2a:22:af:70"
      dhcp4: no
    slave2:
      match:
        macaddress: "3c:a7:2a:22:af:71"
      dhcp4: no
  bonds:
    bond007:
      interfaces:
        - slave1
        - slave2
      parameters:
        mode: balance-rr
        mii-monitor-interval: 10
      dhcp4: no
      addresses:
        - 192.168.189.202/27
      gateway4: 192.168.189.193
      nameservers:
        search:
          - mcs.de
        addresses:
          - "192.168.3.60"
          - "192.168.3.61"
</syntaxhighlight>
dec2801425e687b5be7e45b66de0d2185e5f7da9
Leptoseris sp.
0
128
2291
347
2021-11-25T15:52:28Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Großpolypige Steinkoralle
| WissName = Leptoseris sp.
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
ce1ea96233e8f8eef4bda1731117f4a1612d3b8d
Xenia umbellata
0
113
2292
355
2021-11-25T15:52:36Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| Bild = Xenia_umbellata-Two_weeks.png
| Bildbeschreibung = Xenia umbellata
| DeName = Pumpende Xenie
| WissName = Xenia umbellata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Gruendung =
| Koeniginnen =
| Nest =
| Ausbruchsschutz =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
When you first place the pulsing Xenia (Xenia umbellata) into the tank, it contracts at first, like all soft corals do, and for the first two days it looks more dead than alive. We simply tied ours on with a rubber band. After a few days it unfolds its full splendor again, and after two weeks at the latest it has grown on and the rubber band can be removed.
Since the pulsing Xenia propagates quite readily, it is a good idea to attach a new arrival to a separate rock right away, so that you can keep it under control and still move it if necessary.
<gallery mode="packed-hover">
Image:Xenia_umbellata-New.png|Freshly placed
Image:Xenia_umbellata-First_week.png|During the first week
Image:Xenia_umbellata-Two_weeks.png|After two weeks
</gallery>
9f6f4ecc75a99304418ad0a5fecce6b6df9ea0e9
Cachefilesd
0
382
2293
2117
2021-11-25T15:52:39Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
==Create ramdisk for cache if enough ram==
A directory named /cache is created and the ramdisk is mounted there!
<syntaxhighlight lang=bash>
# systemctl --force --full edit create-ramdisk@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=create cache dir in ramdisk
After=remote-fs.target
Before=cachefilesd.service
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
ExecStartPre=/sbin/modprobe brd rd_nr=1 rd_size=%i
ExecStartPre=/sbin/sgdisk -Z --new 1:0:0 /dev/ram0
ExecStartPre=/sbin/mkfs.ext4 -m 0 /dev/ram0p1
ExecStartPre=-/bin/mkdir /cache
ExecStart=/bin/mount -o user_xattr /dev/ram0p1 /cache
ExecStop=/bin/umount /cache
ExecStop=/sbin/rmmod brd
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
For example, create a 2 GB ramdisk with:
<syntaxhighlight lang=bash>
# systemctl start create-ramdisk@$[ 2 * 1024 * 1024 ].service
</syntaxhighlight>
Destroy it again:
<syntaxhighlight lang=bash>
# systemctl stop create-ramdisk@$[ 2 * 1024 * 1024 ].service
</syntaxhighlight>
Make a 4 GB one instead:
<syntaxhighlight lang=bash>
# systemctl start create-ramdisk@$[ 4 * 1024 * 1024 ].service
</syntaxhighlight>
Once you have found the right value, make it permanent across reboots with:
<syntaxhighlight lang=bash>
# systemctl enable create-ramdisk@$[ ${your_gigabyte_value} * 1024 * 1024 ].service
</syntaxhighlight>
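The arithmetic in the instance name exists because the brd module's rd_size parameter is given in KiB, so gigabytes are multiplied by 1024 * 1024. A small sketch:

```shell
# rd_size for the brd module is in KiB: 2 GiB = 2 * 1024 * 1024 KiB
gigabytes=2
rd_size=$(( gigabytes * 1024 * 1024 ))

# This is the systemd template instance name used above
echo "create-ramdisk@${rd_size}.service"   # -> create-ramdisk@2097152.service
```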
==Check if kernel supports filesystem cache for your filesystem type==
<syntaxhighlight lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</syntaxhighlight>
== Setup /etc/cachefilesd.conf ==
<source>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
# obviously this should be a path to a ramdisk if you have enough ram
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</source>
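The brun/bcull/bstop percentages refer to free space available to the cache: culling starts when free space drops below bcull, is turned off again above brun, and below bstop no new cache files are created. For an assumed 2 GiB cache those thresholds work out as follows (the cache size is just an example value):

```shell
# Assumed cache size of 2 GiB for illustration
cache_bytes=$(( 2 * 1024 * 1024 * 1024 ))

brun_bytes=$((  cache_bytes * 10 / 100 ))  # culling turned off above this much free space
bcull_bytes=$(( cache_bytes *  7 / 100 ))  # culling starts below this much free space
bstop_bytes=$(( cache_bytes *  3 / 100 ))  # no new cache files below this much free space

echo "${brun_bytes} ${bcull_bytes} ${bstop_bytes}"
```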
== Problems with autofs mounted filesystems ==
When using automount with caching, cachefilesd must be running <b>before</b> autofs comes up and might mount the filesystems.
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<syntaxhighlight lang=bash>
# update-rc.d cachefilesd disable
</syntaxhighlight>
==== Make cachefilesd started by systemd ====
<syntaxhighlight lang=bash>
# systemctl edit --force --full cachefilesd.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==== Enable starting new service ====
<syntaxhighlight lang=bash>
# systemctl enable cachefilesd.service
</syntaxhighlight>
==== Verify autofs is depending on cachefilesd.service ====
<syntaxhighlight lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</syntaxhighlight>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<syntaxhighlight lang=bash>
# apt install cachefilesd autofs cifs-utils
</syntaxhighlight>
=== Create the credentials file ===
<syntaxhighlight lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</syntaxhighlight>
=== Create basedir of your cifs mounts ===
<syntaxhighlight lang=bash>
# mkdir --mode=0755 /data/cifs
</syntaxhighlight>
===/etc/auto.master===
<source>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</source>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<source>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</source>
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<syntaxhighlight lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</syntaxhighlight>
but after a few requests:
<syntaxhighlight lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</source>
213681db2c1bbb3ba586d6596eab14697885e59d
MySQL Tipps und Tricks
0
197
2294
2203
2021-11-25T15:52:39Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<syntaxhighlight lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
    while(<>) { chomp; next if /^[^ ]+[ ]*$/;
        if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
            if (defined $q) { print "$q\n"; }
            $q=$_;
        } else {
            $_ =~ s/^[ \t]+//; $q.=" $_";
        }
    }'
</syntaxhighlight>
===Mysql processes each second===
<syntaxhighlight lang=bash>
# mysqladmin -i 1 --verbose processlist
</syntaxhighlight>
===All grants===
<syntaxhighlight lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</syntaxhighlight>
Or a little nicer:
<syntaxhighlight lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</syntaxhighlight>
===Last update time===
* Per table
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</syntaxhighlight>
* Per database
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</syntaxhighlight>
==InnoDB space==
===Per database===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</syntaxhighlight>
===Per table===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</syntaxhighlight>
==Logging==
If you use SET GLOBAL, the change lasts only until the server restarts.
'''Don't forget to add it in your my.cnf to make it permanent!'''
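A sketch of a matching my.cnf fragment; the option names are the standard server variables, but the exact file location depends on your distribution (the values here just mirror the SET GLOBAL examples in this section):

```ini
[mysqld]
# Persistent equivalents of the SET GLOBAL statements below
log_output          = FILE
general_log         = OFF
slow_query_log      = ON
slow_query_log_file = /var/lib/mysql/slow-query.log
```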
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</syntaxhighlight>
* Log to the files named by general_log_file and slow_query_log_file
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</syntaxhighlight>
* Both: tables and files
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</syntaxhighlight>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</syntaxhighlight>
is equal to
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</syntaxhighlight>
===Enable/disable general logging===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
===Enable/disable logging of slow queries===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
If you get
<pre>
ERROR: Failed on connect: SSL connection error: protocol version mismatch
</pre>
try
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
To find out which binlog file to investigate on the master, run this on your slave:
<syntaxhighlight lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</syntaxhighlight>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<syntaxhighlight lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</syntaxhighlight>
====Mount options====
* noatime
* data=writeback (best performance , only metadata is logged)
* data=ordered (ok performance , recording metadata and grouping metadata related to the data changes)
* data=journal (worst performance, but best data protection, ext3 default mode, recording metadata and all data)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<syntaxhighlight lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
Determine the size:
<syntaxhighlight lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</syntaxhighlight>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</syntaxhighlight>
Start mysql:
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
Aaaaaand... do not forget AppArmor! Like I did... :-D
<syntaxhighlight lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</syntaxhighlight>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<syntaxhighlight lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</syntaxhighlight>
Reload apparmor:
<syntaxhighlight lang=bash>
# service apparmor reload
</syntaxhighlight>
Another try!
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
<syntaxhighlight lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</syntaxhighlight>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</syntaxhighlight>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<syntaxhighlight lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</syntaxhighlight>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems, do the following:
<syntaxhighlight lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</syntaxhighlight>
====== /etc/sysctl.d/99-mysql.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</syntaxhighlight>
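A small sanity check on the values above: kernel.shmmax is given in bytes, and 2147483648 is exactly 2 GiB. (Note that kernel.shmall is counted in pages, not bytes, so the same number there allows far more than 2 GiB of shared memory.)
<syntaxhighlight lang=bash>
# 2 GiB in bytes, the unit kernel.shmmax expects
echo $(( 2 * 1024 * 1024 * 1024 ))
</syntaxhighlight>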
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</syntaxhighlight>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<syntaxhighlight lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</syntaxhighlight>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service, you have to tell systemd the new limit.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the change and check the limit:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</syntaxhighlight>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the Unit section of the systemd service.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the changes...
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</syntaxhighlight>
... and check they are active:
<syntaxhighlight lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</syntaxhighlight>
====== /etc/idmapd.conf ======
<syntaxhighlight lang=text>
# Domain = localdomain
Domain = this.domain.tld
</syntaxhighlight>
====== /etc/fstab ======
<syntaxhighlight lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</syntaxhighlight>
<syntaxhighlight lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</syntaxhighlight>
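To judge whether the query cache actually pays off, a hit ratio can be computed from the Qcache_hits and Com_select status counters. The counter names are standard MySQL status variables, but the numbers below are made-up sample values:
<syntaxhighlight lang=bash>
# Sample counters (fabricated); on a live server take them from
#   mysql -NBe "SHOW GLOBAL STATUS WHERE Variable_name IN ('Qcache_hits','Com_select')"
qcache_hits=150000
com_select=50000
awk -v h="$qcache_hits" -v s="$com_select" \
    'BEGIN { printf "query cache hit ratio: %.1f%%\n", 100 * h / (h + s) }'
</syntaxhighlight>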
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<syntaxhighlight lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</syntaxhighlight>
====== Short stupid performance test ======
<syntaxhighlight lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</syntaxhighlight>
Some things seem to work...
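For the record, the throughput figure dd reports can be recomputed from the byte count and the elapsed seconds; dd divides by 10^6, i.e. decimal megabytes:
<syntaxhighlight lang=bash>
# 1073741824 bytes written in 1.7552 s, in decimal MB/s as dd prints it
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 1.7552 / 1000000 }'
</syntaxhighlight>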
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
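The commented rule of thumb (innodb_buffer_pool_size = 0.7 * total memory) reproduces the 1433M above when you assume 2048 MB of RAM; the memory size here is only an example value, not taken from the original host:
<syntaxhighlight lang=bash>
# 70% of total RAM in MB, using integer arithmetic
total_mb=2048   # example machine size, adjust to your host
echo "innodb_buffer_pool_size=$(( total_mb * 7 / 10 ))M"
</syntaxhighlight>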
==Analyze==
<syntaxhighlight lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</syntaxhighlight>
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===Find statements which lead into an error===
<syntaxhighlight lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</syntaxhighlight>
===percona-toolkit===
<syntaxhighlight lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</syntaxhighlight>
===Sysbench===
<syntaxhighlight lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</syntaxhighlight>
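sysbench prints a summary at the end of the run; the transactions-per-second figure can be pulled out with awk. The line below is a fabricated sample, not output from this run:
<syntaxhighlight lang=bash>
# Fabricated sysbench summary line; the field between the
# parentheses is the throughput
line='    transactions:                        500000 (555.55 per sec.)'
echo "$line" | awk -F'[()]' '{ print $2 }'
</syntaxhighlight>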
==Recover a damaged root account==
===Lost grants===
Try out:
<syntaxhighlight lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
Or:
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</syntaxhighlight>
===Lost password===
<syntaxhighlight lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<syntaxhighlight lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</syntaxhighlight>
/etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
/etc/mysql/conf.d/myisam.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</syntaxhighlight>
/etc/mysql/conf.d/mysqld.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
syslog
</syntaxhighlight>
/etc/mysql/conf.d/query_cache.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</syntaxhighlight>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<syntaxhighlight lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</syntaxhighlight>
b99552e701a678a02b5581e6fc1e81127ef23b9f
MariaDB SSL
0
295
2295
1368
2021-11-25T15:52:43Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:MariaDB|SSL]]
[[Kategorie:MySQL|SSL]]
To be continued!
==Create keys and certificates==
<syntaxhighlight lang=bash>
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server'
</syntaxhighlight>
<syntaxhighlight lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=web-server.domain.de'
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server.domain.de'
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
chown mysql:www-data *
chown www-data:www-data client-key.pem
chmod 644 *-cert.pem
chmod 600 *-key.pem
</syntaxhighlight>
<syntaxhighlight lang=php>
# php -r '
$db = new PDO("mysql:host=db-server.domain.de;dbname=testdb", "ssltestuser", "ssltestuserpassword",
array(
PDO::MYSQL_ATTR_SSL_CA=>"/etc/mysql/ssl/ca-cert.pem",
PDO::MYSQL_ATTR_SSL_KEY=>"/etc/mysql/ssl/client-key.pem",
PDO::MYSQL_ATTR_SSL_CERT=>"/etc/mysql/ssl/client-cert.pem",
PDO::MYSQL_ATTR_SSL_CAPATH=>"/etc/ssl/certs"
)
);
$result = $db->query("SHOW STATUS LIKE \"SSL_%\"");
$result->execute();
$status=$result->fetchAll();
print_r($status);
'
</syntaxhighlight>
d08877939b26ba9461be4ac750bae8333c326d19
SunCluster oneliner
0
189
2296
2205
2021-11-25T15:52:45Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:SunCluster|Einzeiler]]
==Resource Groups to remaster==
<syntaxhighlight lang=bash>
# /usr/cluster/bin/clrg status | \
/usr/bin/nawk '
NR<=5 || ( NF>=3 && $(NF-1)=="Yes" ){
next;
}
NF==4 {
rg=$1;
primary=$2;
if($NF=="Online"){
printf "%20s\t%s on %s\n",rg,$NF,primary
}
while($0 !~ /^$/){
getline;
if($NF=="Online"){
printf "%20s\t%s on %s, but not on primary %s\n",rg,$NF,$1,primary;
list=list" "rg
}
}
}
END{
if(list != ""){
printf "To fix it do:\n\tclrg remaster %s\n",list;
}
}'
</syntaxhighlight>
607072d6d142ada60dde1434ce8a8709a1752172
Category:Qemu
14
283
2297
1282
2021-11-25T15:52:46Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Virtualization]]
05c04b5d9c8b1a23bb915ae34cb9a88346d5320a
RedHat networking
0
301
2298
1414
2021-11-25T15:52:47Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
= Bonding =
In this example we configure two bonds:
* bond0: failover (eno1/bond0_slave1 and eno3/bond0_slave2)
* bond1: LACP
== /etc/modprobe.d/bonding.conf ==
<syntaxhighlight lang=conf>
alias netdev-bond0 bonding
options bond0 miimon=100 mode=active-backup updelay=0 downdelay=0 primary=bond0_slave1
alias netdev-bond1 bonding
options bond1 miimon=100 mode=4 lacp_rate=1
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond0 ==
<syntaxhighlight lang=conf>
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond0
UUID=9e2088b8-4cfe-435a-b0a2-9387f0fc8024
ONBOOT=yes
DNS1=172.16.0.69
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=active-backup primary=bond0_slave1"
IPADDR=172.16.0.105
PREFIX=16
GATEWAY=172.16.0.1
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave1 ==
<syntaxhighlight lang=conf>
HWADDR=94:18:82:80:C2:18
TYPE=Ethernet
NAME=bond0_slave1
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave2 ==
<syntaxhighlight lang=conf>
HWADDR=94:18:82:80:C2:1A
TYPE=Ethernet
NAME=bond0_slave2
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</syntaxhighlight>
== Check state of bond0 ==
<syntaxhighlight lang=bash>
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:18
Slave queue ID: 0
Slave Interface: eno3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:1a
Slave queue ID: 0
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond1 ==
<syntaxhighlight lang=conf>
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond1
UUID=c9a4bce2-5dbe-4cf9-beb6-34a24512ae23
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
IPADDR=172.20.0.30
PREFIX=24
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave1 ==
<syntaxhighlight lang=conf>
TYPE=Ethernet
NAME=bond1_slave1
UUID=9ad3a93f-362e-4a18-bb2e-c4588e666e12
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno49
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave2 ==
<syntaxhighlight lang=conf>
TYPE=Ethernet
NAME=bond1_slave2
UUID=6d8015ef-fe60-472a-b18f-17caf952e45b
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno50
</syntaxhighlight>
== Check state of bond1 ==
<syntaxhighlight lang=bash>
# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 13
Partner Key: 70
Partner Mac Address: 01:e0:52:00:00:02
Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 534
port state: 63
Slave Interface: eno50
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 1046
port state: 63
</syntaxhighlight>
1204dd12f5880d39419edaaddd6402e8e70b98d2
ZFS cheatsheet
0
29
2299
1323
2021-11-25T15:52:53Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting undeletable snapshots ==
Here after an aborted ZFS send/recv:
<syntaxhighlight lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</syntaxhighlight>
== ZFS Tuning ==
Perceived slowness on systems with ZFS comes from its very large cache appetite, which can be limited.
First, check the current state:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</syntaxhighlight>
Or ZFS only:
<syntaxhighlight lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</syntaxhighlight>
Print all ARC parameters:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</syntaxhighlight>
You can also print all parameters that are set for ZFS:
<syntaxhighlight lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</syntaxhighlight>
Kernel parameters can also be set online with:
<syntaxhighlight lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</syntaxhighlight>
This sets zfs_arc_max to 4 GB = 0x100000000.
== Limiting the ARC cache ==
Simply put this into /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<syntaxhighlight lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</syntaxhighlight>
See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
But: '''NEVER DO THIS''' on a production system!
Never use ''mdb -kw'' to set these values!
On a '''test system''', however, you can get the position in the kernel with
<syntaxhighlight lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</syntaxhighlight>
Calculate for example 8GB:
<syntaxhighlight lang=bash>
# printf "0x%x\n" $[ 8 * 1024 ** 3 ]
0x200000000
</syntaxhighlight>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<syntaxhighlight lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</syntaxhighlight>
== Showing ZFS space usage in more detail ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migrating UFS root -> ZFS root via Live Upgrade ==
First create the ZFS root pool:
# zpool create rpool /dev/dsk/<zfs-disk>
To avoid problems, keep the name rpool.
Create the boot environment (BE) with lucreate:
# lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.
Check whether the bootfs property was set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any leftover rootdev entries in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the ZFS boot block on the ZFS disk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
Activate the new BE:
# luactivate zfsBE
==cannot destroy 'snapshot': dataset is busy==
<syntaxhighlight lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</syntaxhighlight>
==Fragmentation==
<syntaxhighlight lang=bash>
# zdb -mm <pool> | nawk '/fragmentation/{count++;frag+=$NF}END{printf "Overall fragmentation %.2d\n",(frag/count);}'
</syntaxhighlight>
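The averaging logic of that one-liner can be demonstrated on fabricated zdb-style input (plain awk here instead of Solaris nawk; in arithmetic, awk coerces a value like "12%" to 12):
<syntaxhighlight lang=bash>
# Three fake metaslab fragmentation lines standing in for `zdb -mm <pool>` output
printf 'fragmentation 12%%\nfragmentation 34%%\nfragmentation 20%%\n' |
awk '/fragmentation/ { count++; frag += $NF }
     END { printf "Overall fragmentation %.2d\n", frag / count }'
</syntaxhighlight>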
42a6f6d55f3a5d30026216b4edea91f067ad31c5
Solaris SMF
0
100
2300
1161
2021-11-25T15:52:55Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
__FORCETOC__
== Running foreground processes ==
<syntaxhighlight lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='network/foreground-daemon' type='service' version='0'>
    <single_instance/>
    <dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/system/filesystem/local'/>
    </dependency>
    <dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
      <service_fmri value='svc:/network/loopback'/>
    </dependency>
    <dependency name='network' grouping='optional_all' restart_on='error' type='service'>
      <service_fmri value='svc:/milestone/network'/>
    </dependency>
    <instance name='default' enabled='true'>
      <exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
      <exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
      <exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
        <method_context project='foreground-project'>
          <method_credential user='foreground-user' group='noaccess'/>
        </method_context>
      </exec_method>
      <property_group type="framework" name="startd">
        <propval type="astring" name="duration" value="child"/>
      </property_group>
      <template>
        <common_name>
          <loctext xml:lang='C'>Foreground Daemon</loctext>
        </common_name>
        <documentation>
          <manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
        </documentation>
      </template>
    </instance>
    <stability value='Unstable'/>
  </service>
</service_bundle>
</syntaxhighlight>
==Adding dependency on another service==
For example mount NFS after ZFS:
<syntaxhighlight lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</syntaxhighlight>
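The same dependency can also be declared directly in a service manifest instead of being patched into the repository with svccfg. A sketch of the equivalent manifest fragment (not taken from a delivered manifest; element names as in the manifest example above):

<syntaxhighlight lang=xml>
<dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
  <service_fmri value='svc:/system/filesystem/local:default'/>
</dependency>
</syntaxhighlight>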
==Setting multiple parameters to environment variables==
The goal:
* Change -Xmx from 512m to 2G
The problem:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</syntaxhighlight>
So you have to set the complete environment this way:
* Get the complete environment:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</syntaxhighlight>
* Set the complete (modified) environment:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</syntaxhighlight>
* Check it with:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</syntaxhighlight>
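Because the whole environment has to be pasted back in one piece, it is less error-prone to rewrite the listed value programmatically and paste the result into setprop. A sketch with a shortened stand-in for the CATALINA_OPTS value:

<syntaxhighlight lang=bash>
# shortened stand-in for the value printed by 'listprop method_context/environment'
env_value='CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote'
# swap the heap setting; the output is what goes back into setprop
printf '%s\n' "${env_value}" | sed 's/-Xmx512m/-Xmx2G/'
# CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote
</syntaxhighlight>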
== Ignore child process coredumps ==
<syntaxhighlight lang=xml>
<property_group name='startd' type='framework'>
<!-- sub-process core dumps shouldn't restart session -->
<propval name='ignore_error' type='astring'
value='core,signal' />
</property_group>
</syntaxhighlight>
<syntaxhighlight lang=bash>
# svccfg -s clamav
svc:/network/clamav> addpg startd framework
svc:/network/clamav> addpropvalue startd/ignore_error astring: core,signal
svc:/network/clamav> end
</syntaxhighlight>
08e5bf05087f4fe22e31fa28f21bd40fc705e4ea
Ufw
0
224
2301
2236
2021-11-25T15:52:59Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Disable IPv6==
/etc/default/ufw
<syntaxhighlight lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take effect.
IPV6=no
</syntaxhighlight>
/etc/ufw/sysctl.conf
<syntaxhighlight lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</syntaxhighlight>
==Setup Rules==
===Adding a rule===
<syntaxhighlight lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                 Action    From
--                 ------    ----
22/tcp (OpenSSH)   ALLOW IN  192.168.2.0/24  (log-all)
</syntaxhighlight>
===Inserting before===
<syntaxhighlight lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                 Action    From
--                 ------    ----
22/tcp (OpenSSH)   ALLOW IN  192.168.1.0/24  (log-all)
22/tcp (OpenSSH)   ALLOW IN  192.168.2.0/24  (log-all)
# ufw status numbered
Status: active
     To          Action    From
     --          ------    ----
[ 1] OpenSSH     ALLOW IN  192.168.1.0/24  (log-all)
[ 2] OpenSSH     ALLOW IN  192.168.2.0/24  (log-all)
</syntaxhighlight>
==Own applications==
===nrpe===
/etc/ufw/applications.d/nrpe
<syntaxhighlight lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</syntaxhighlight>
===MySQL===
/etc/ufw/applications.d/mysql
<syntaxhighlight lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</syntaxhighlight>
===Exim===
/etc/ufw/applications.d/exim
<syntaxhighlight lang=bash>
[Exim SMTP]
title=Mail Server (Exim, SMTP)
description=Small, but very powerful and efficient mail server
ports=25/tcp
[Exim SMTP Virusscanned]
title=Mail Server (Exim, SMTP Virusscanned)
description=Small, but very powerful and efficient mail server
ports=26/tcp
[Exim SMTPS]
title=Mail Server (Exim, SMTPS)
description=Small, but very powerful and efficient mail server
ports=465/tcp
[Exim SMTP Message Submission]
title=Mail Server (Exim, Message Submission)
description=Small, but very powerful and efficient mail server
ports=587/tcp
</syntaxhighlight>
Get a list of rules to set from Exim's configuration:
<syntaxhighlight lang=awk>
# exim -bP local_interfaces | awk '
BEGIN{
ports[25]="Exim SMTP";
ports[26]="Exim SMTP Virusscanned"
ports[465]="Exim SMTPS";
ports[587]="Exim SMTP Message Submission";
from="any"; # <----- Look if it fits what you want
}
{
gsub(/^.*= /,"");
split($0,services,/ : /);
for(service in services){
split(services[service],part,/\./);
ip=part[1]"."part[2]"."part[3]"."part[4];
port=part[5];
printf "ufw allow log from %s to %s app \"%s\"\n",from,ip,ports[port];
}
}'
ufw allow log from any to 192.168.5.103 app "Exim SMTP"
ufw allow log from any to 192.168.5.103 app "Exim SMTP Virusscanned"
ufw allow log from any to 192.168.5.103 app "Exim SMTPS"
</syntaxhighlight>
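The parsing can be checked without a live Exim by feeding the same awk program a fabricated `local_interfaces` line (address made up; the output is sorted because awk's `for (x in array)` iteration order is unspecified):

<syntaxhighlight lang=bash>
echo 'local_interfaces = 192.168.5.103.25 : 192.168.5.103.465' | awk '
BEGIN{
ports[25]="Exim SMTP";
ports[465]="Exim SMTPS";
from="any";
}
{
gsub(/^.*= /,"");
split($0,services,/ : /);
for(service in services){
split(services[service],part,/\./);
ip=part[1]"."part[2]"."part[3]"."part[4];
port=part[5];
printf "ufw allow log from %s to %s app \"%s\"\n",from,ip,ports[port];
}
}' | sort
# ufw allow log from any to 192.168.5.103 app "Exim SMTP"
# ufw allow log from any to 192.168.5.103 app "Exim SMTPS"
</syntaxhighlight>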
==Inspect your application profile==
<syntaxhighlight lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</syntaxhighlight>
1d0588cbdc864c71f50ffeb528b4a02a052d3b68
Find free ip
0
366
2303
1964
2021-11-25T15:53:03Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie: Bash|find_free_ip]]
<syntaxhighlight lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.2 2019/09/06 14:33:32 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
function usage () {
printf "Usage: ${0} <ip address>[/(<CIDR suffix>|<netmask>)]\n\n"
printf " This script searches a range of IP addresses for ones that have no reverse DNS.\n"
printf " Default range if no CIDR suffix or netmask is given is a class C (/24) range of 256 addresses.\n"
printf " address : This has to be a IPv4 address. Zero octets can be omittet.\n"
printf " For example 192.168 is sufficient for 192.168.0.0 .\n"
printf " CIDR suffix : This describes the nomber of bits set to 1 from left in the netmask.\n"
printf " netmask : Four octets representing the netmask.\n"
printf "\n"
}
case ${1} in
""|--help|-h)
usage
exit 1
;;
*)
input=${1}
;;
esac
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
function bin2ones () {
bin=${1}
ones=0
for((i=0;i<${#bin};i++))
do
bit=${bin:$i:1}
[ ${bit} -eq 0 ] && break
ones=$[ ones + 1 ]
done
echo ${ones}
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec $(fillOctets ${suffix})))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/$(bin2ones ${suffixbin})\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} ${ip} >/dev/null 2>&1 ; pingable=$?
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfrei\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$[ ${count} + 1 ]
else
printf "%s\n" "${info}"
count=1
fi
done
</syntaxhighlight>
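The two base-conversion helpers are the heart of the script and can be exercised on their own. They are reproduced here lightly cleaned up (local variables, indentation); this requires bash:

<syntaxhighlight lang=bash>
#!/bin/bash
# decimal -> dotted quad, e.g. 2130706689 -> 127.0.1.1
function dec2ipv4 () {
    local ipdec=${1} octet i
    local octets=()
    for ((i=24; i>=0; i-=8)); do
        octet=$(( ipdec >> i ))
        octets+=(${octet})
        ipdec=$(( ipdec - ( octet << i ) ))
    done
    ( IFS=. ; echo "${octets[*]}" )
}
# dotted quad -> decimal
function ipv42dec () {
    local dec=0 i
    IFS='.' read -ra octets <<< "${1}"
    for ((i=0; i<4; i++)); do
        dec=$(( dec + octets[i] * ( 256 ** ( 3 - i ) ) ))
    done
    echo ${dec}
}
dec2ipv4 2130706689    # 127.0.1.1
ipv42dec 192.168.0.1   # 3232235521
</syntaxhighlight>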
ff2f0200ab97d085eaa4bfc925a9ec16c2a1bebe
PHP
0
361
2304
1914
2021-11-25T15:53:14Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:PHP]]
==Install mcrypt on Ubuntu 18.04==
<syntaxhighlight lang=bash>
$ sudo apt -y install gcc make autoconf libc-dev pkg-config libmcrypt-dev php7.2-dev
$ sudo pecl install --nodeps mcrypt-snapshot
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ echo "extension=mcrypt.so" | sudo tee -a /etc/php/7.2/fpm/php.ini
$ php-fpm7.2 -i | grep mc
Registered Stream Filters => zlib.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, mcrypt.*, mdecrypt.*, bzip2.*, convert.iconv.*
mcrypt
mcrypt support => enabled
mcrypt_filter support => enabled
mcrypt.algorithms_dir => no value => no value
mcrypt.modes_dir => no value => no value
</syntaxhighlight>
a03c1572c6a32451e2369bdf470835fe42ba5304
Solaris LiveUpgrade
0
218
2305
1819
2021-11-25T15:53:17Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<source lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</source>
Higher patch revisions may be available...
==Mount the Solaris 10 DVD ISO-image==
<source lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10u11
</source>
==Upgrade the new BootEnvironment==
<source lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</source>
==Activate the new BootEnvironment==
<source lang=bash>
# luactivate Solaris10u11
</source>
=Install EIS patches=
==Mount the new EIS-ISO==
<source lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</source>
==Update LU patches==
<source lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</source>
==Create the new BootEnvironment==
<source lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</source>
==Mount the new BootEnvironment==
<source lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</source>
==Install EIS-Patches==
<source lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
Now the Solaris 10_x86 Recommended Patches...
...
</source>
==Problems: Installing this patch set to an alternate boot environment first requires the live boot environment to have patch utilities and other prerequisite patches==
<source lang=bash>
Installing this patch set to an alternate boot environment first requires the
live boot environment to have patch utilities and other prerequisite patches
at the same (or higher) patch revisions as those delivered by this patch set.
The required prerequisite patches can be applied to the live boot environment
by invoking this script with the '--apply-prereq' option, ie.
./installpatchset --apply-prereq --s10patchset
</source>
===Solution===
<source lang=bash>
root@solaris10 # cd /mnt/var/tmp/10/10_x86_Recommended
root@solaris10 # ./installpatchset --apply-prereq --s10patchset
...
Installation of prerequisite patches complete.
...
</source>
==Umount the BE==
<source lang=bash>
# luumount Solaris10-EIS-15JUL15
</source>
==Activate BE & Reboot==
<source lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</source>
= Solaris 10 CPU with LiveUpgrade =
== Install LiveUpgrade (and some other necessary) Patches==
In the unzipped CPU do:
<source lang=bash>
root@solaris10 # ./installpatchset --s10patchset --apply-prereq
</source>
== Create LiveUpgrade environment ==
In this example we use the CPU_2017-07:
<source lang=bash>
root@solaris10 # lucreate -n Solaris_10-CPU_2017-07
...
Population of boot environment <Solaris_10-CPU_2017-07> successful.
Creation of boot environment <Solaris_10-CPU_2017-07> successful.
</source>
== Apply the patchset to the LiveUpgrade environment ==
<source lang=bash>
root@solaris10 # ./installpatchset --s10patchset -B Solaris_10-CPU_2017-07
</source>
== Activate the new patched LiveUpgrade environment ==
<source lang=bash>
root@solaris10 # luactivate Solaris_10-CPU_2017-07
</source>
Now you can reboot into it whenever you want, but you should do so soon, because anything written in the meantime, such as log files, will exist only in the currently running boot environment.
509ec47ec165770b9c262da0ca91ab576b539c16
SuSE Manager
0
348
2306
2178
2021-11-25T15:53:18Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
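What the `-x` expression does can be checked in isolation: `s/$/-<timestamp>/` simply appends the suffix to every channel name (fixed timestamp here for illustration):

<syntaxhighlight lang=bash>
# in the real command the suffix comes from $(date '+%Y-%m-%d_%H:%M:%S')
date_suffix='2017-11-22_14:26:42'
echo 'sles12-sp3-pool-x86_64' | sed "s/\$/-${date_suffix}/"
# sles12-sp3-pool-x86_64-2017-11-22_14:26:42
</syntaxhighlight>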
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful space commands
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that zypper in newer versions calls curl with a specific cipher list named "DEFAULT_SUSE" which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now get any kind of repository bound to your SuSE like the ISO this version was installed with:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall zypper in the old version that does not call curl with the cipher list DEFAULT_SUSE:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: After some further debugging we found that the system library path pulled in a wrong OpenSSL library.=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 in the system library path. There was another libssl.so.1.0.0 which was version 1.0.2h. OK. What to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other stuff running without the manipulation at the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create working directory ===
<syntaxhighlight lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="/root/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</syntaxhighlight>
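The chained `${hosts[n]:+,DNS:...}` expansions in the request above stop after five names; a small loop builds the same subjectAltName string for any number of entries (host names as in the example; requires bash):

<syntaxhighlight lang=bash>
hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
san="DNS:${hosts[0]}"
for h in "${hosts[@]:1}"; do
    san="${san},DNS:${h}"
done
printf 'subjectAltName=%s\n' "${san}"
# subjectAltName=DNS:susemgr.tld.de,DNS:othername.tld.de,DNS:anotheranothername.tld.de
</syntaxhighlight>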
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -i $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</syntaxhighlight>
e395405b9fc7d7d7690f0b73a3bf6cfe7a5f4513
Dpkg
0
244
2307
1248
2021-11-25T15:53:21Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Linux]]
==Missing key id NO_PUBKEY==
<syntaxhighlight lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</syntaxhighlight>
==Package sources that resolve to IPv6 addresses sometimes cause problems==
To force the usage of the returned IPv4 addresses do:
<syntaxhighlight lang=bash>
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
</syntaxhighlight>
==Packages from a specific source==
===Prerequisite: dctrl-tools===
<syntaxhighlight lang=bash>
sudo apt-get install dctrl-tools
</syntaxhighlight>
===Show packages===
For example all PPA packages
<syntaxhighlight lang=bash>
sudo grep-dctrl -sPackage . /var/lib/apt/lists/ppa*_Packages
</syntaxhighlight>
==From where is my package installed?==
<syntaxhighlight lang=bash>
sudo apt-cache policy <package>
</syntaxhighlight>
==Does my file match the checksum from the package?==
If you fear you are hacked verify your binaries!
===Prerequisite: debsums===
<syntaxhighlight lang=bash>
sudo apt-get install debsums
</syntaxhighlight>
===Verify packages===
<syntaxhighlight lang=bash>
sudo debsums <package name>
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo debsums unhide.rb
/usr/bin/unhide.rb OK
/usr/share/doc/unhide.rb/changelog.Debian.gz OK
/usr/share/doc/unhide.rb/copyright OK
/usr/share/lintian/overrides/unhide.rb OK
/usr/share/man/man8/unhide.rb.8.gz OK
</syntaxhighlight>
869cfe6fc3eb4d03d0ea60957d19938b1c000da2
Template:Taxobox/Zeile
10
50
2308
88
2021-11-25T15:53:24Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>{{#if: {{{Rang|}}}{{{WissName|}}}{{{Name|}}} | {{!-}}
{{#ifeq: {{lc:{{{Rang|}}}}}|incertae sedis|{{#ifeq: {{{Name|}}}{{{WissName|}}}||{{!}} style="text-align:center;" colspan="2" {{!}} ''[[incertae sedis]]''|{{!}}<div class="error">[[Vorlage:Taxobox/Rang|Warnung: Bei „incertae sedis“ keine weiteren Angaben in dieser Zeile möglich!]] </div>
}}|{{!}} {{#if: {{#ifeq: {{lc: {{{KeinRang|}}}}} | ja | x }}{{#ifeq: {{lc: {{{Rang|}}}}} | ohne | x }}{{#if: {{{Rang|}}}||x}}||''{{Taxobox/Rang|Rang={{{Rang|}}}}}:''}} {{!!}} {{#if:{{{Name|}}}|{{#if:{{{KeinLink|}}}|{{{Name|}}}|{{#if:{{{LinkName|}}}|[[{{{LinkName|}}}|{{{Name|}}}]]|[[{{{Name|}}}]]}}}} }} {{#if:{{{WissName|}}}|{{#if:{{{Name|}}}| ({{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}{{{WissName|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}})|{{#if:{{{KeinLink|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}{{{WissName|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}[[{{#if:{{{LinkName|}}}|{{{LinkName|}}}{{!}}}}{{{WissName|}}}]]{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}}}}}}}}}}}</includeonly><noinclude>Diese Vorlage wird innerhalb der [[Vorlage:Taxobox]] verwendet, technische Dokumentation siehe [[Vorlage:Taxobox/Doku/Tech]].
[[Category:Vorlage:Untervorlage|Taxobox/Zeile]]
</noinclude>
231dbce0c952d1db70d5217a944c70fe7a7073f4
Template:Nobots
10
68
2309
122
2021-11-25T15:53:25Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
#redirect [[Vorlage:bots]]
<!-- This template is part of GLOBALLY IDENTICAL templates; note the remarks on meta! -->
[[Category:Vorlage:für Bots|Nobots]]
310ebb71777650ebc18c74ee37775470709efa0b
Solaris 11 Networking
0
96
2310
2084
2021-11-25T15:53:29Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|Networking]]
= Switch to manual configuration =
To keep the automatic network configuration from reverting your changes, you have to enable the manual configuration profile:
<pre>
# netadm enable -p ncp DefaultFixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<syntaxhighlight lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</syntaxhighlight>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<syntaxhighlight lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</syntaxhighlight>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm link.
To change the dladm link's MTU, the ipadm interface has to be disabled first (this causes downtime, so be careful!).
<syntaxhighlight lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</syntaxhighlight>
= Aggregate for iSCSI =
This is crude, but it worked on our Cisco switches:
<syntaxhighlight lang=bash>
# dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</syntaxhighlight>
= Set TCP parameters in immutable zones =
In normal immutable mode, zlogin -U fails to set the property persistently:
<syntaxhighlight lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</syntaxhighlight>
To set it persistently, you need to boot the zone writable first:
<syntaxhighlight lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</syntaxhighlight>
d52f474adccdf223f8a485a2e0214f1dacfa52b4
2312
2310
2021-11-25T15:53:39Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|Networking]]
= Switch to manual configuration =
To keep the automatic network configuration from reverting your changes, you have to enable the manual configuration profile:
<pre>
# netadm enable -p ncp DefaultFixed
</pre>
= Nodename =
<pre>
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: camponotus
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node
</pre>
= Interfaces =
== Initial setup ==
<pre>
# ipadm create-ip net1
# ipadm create-addr -T static -a local=192.168.5.101/24 net1/v4mailcluster1
</pre>
== IPMP ==
<pre>
# ipadm create-ip net2
# ipadm create-ip net3
# ipadm create-addr -T static -a 192.168.5.102/24 net2/v4ipmptestadress
# ipadm create-addr -T static -a 192.168.5.103/24 net3/v4ipmptestadress
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net2 -i net3 ipmp0
# ipadm create-addr -T static -a 192.168.5.101/24 ipmp0/v4mailcluster0
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net2 yes ipmp0 ------- up ok ok
net3 yes ipmp0 --mbM-- up ok ok
# ipmpstat -an
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
192.168.5.101 up ipmp0 net3 net2 net3
</pre>
Set one interface to standby:
<pre>
# ipadm set-ifprop -p standby=on -m ip net2
# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net3 yes ipmp0 --mbM-- up ok ok
net2 no ipmp0 is----- up ok ok
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 ok 10.00s net3 (net2)
</pre>
== More sophisticated with aggregations and vnics ==
<syntaxhighlight lang=bash>
# dladm show-phys -L
LINK DEVICE LOC
net0 igb12 /SYS/MB
net1 igb13 /SYS/MB
net2 igb14 /SYS/MB
net3 igb15 /SYS/MB
net4 igb0 /SYS/MB/PCI_MEZZ/PCIE3
net5 igb1 /SYS/MB/PCI_MEZZ/PCIE3
net6 igb2 /SYS/MB/PCI_MEZZ/PCIE3
net7 igb3 /SYS/MB/PCI_MEZZ/PCIE3
net8 igb4 /SYS/MB/RISER2/PCIE2
net9 igb5 /SYS/MB/RISER2/PCIE2
net10 igb6 /SYS/MB/RISER2/PCIE2
net11 igb7 /SYS/MB/RISER2/PCIE2
net12 igb8 /SYS/MB/RISER0/PCIE0
net13 igb9 /SYS/MB/RISER0/PCIE0
net14 igb10 /SYS/MB/RISER0/PCIE0
net15 igb11 /SYS/MB/RISER0/PCIE0
net16 usbecm2 --
# dladm create-aggr -P L2,L3 -l net8 -l net9 -l net10 -l net11 PCIE2
# dladm create-aggr -P L2,L3 -l net4 -l net5 -l net6 -l net7 PCIE3
# dladm show-link
...
PCIE2 aggr 1500 up net8 net9 net10 net11
PCIE3 aggr 1500 up net4 net5 net6 net7
...
# dladm create-vnic -l PCIE2 zone01_ipmp0
# dladm create-vnic -l PCIE3 zone01_ipmp1
# dladm show-link
...
zone01_ipmp1 vnic 1500 up PCIE3
zone01_ipmp0 vnic 1500 up PCIE2
...
# zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp0
zonecfg:zone01:net> end
zonecfg:zone01> add net
zonecfg:zone01:net> set configure-allowed-address=true
zonecfg:zone01:net> set physical=zone01_ipmp1
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
</syntaxhighlight>
== Change address ==
1. Create the new address:
<pre>
# ipadm create-addr -T static -a 192.168.5.111/24 ipmp0/v4mailcluster1
</pre>
2. Log in via the new IP address.
3. Delete the old address:
<pre>
# ipadm delete-addr ipmp0/v4mailcluster0
</pre>
= DNS =
== Client ==
<pre>
# svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: "( 0.0.0.0 192.168.1.1 )"
# svccfg -s svc:/network/dns/client setprop config/search = astring: "timmann.de blindhuhn.de"
# svcadm refresh svc:/network/dns/client:default
# svcadm restart svc:/network/dns/client:default
</pre>
Activate DNS in the name service switch (nsswitch.conf):
<pre>
# perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /etc/nsswitch.conf
# nscfg import -f svc:/system/name-service/switch:default
# svcadm refresh name-service/switch
# svcprop -p config/host svc:/system/name-service/switch:default
files\ dns
</pre>
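The perl one-liner only matches a hosts: line that contains nothing but "files". You can rehearse the substitution on a scratch copy before touching the real file (the path /tmp/nsswitch.test is just an example):
<syntaxhighlight lang=bash>
# Rehearse the nsswitch.conf substitution on a scratch file
printf 'hosts:      files\n' > /tmp/nsswitch.test
perl -pi -e "s/^hosts:\s+files$/hosts: files dns/g" /tmp/nsswitch.test
grep '^hosts:' /tmp/nsswitch.test   # hosts: files dns
</syntaxhighlight>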
== Server ==
<pre>
# groupadd -g 53 dns
# useradd -u 53 -g dns -d /var/named -m dns
# usermod -A solaris.smf.manage.bind dns
# svccfg -s svc:network/dns/server:default setprop start/group = dns
# svccfg -s svc:network/dns/server:default setprop start/user = dns
# svccfg -s svc:network/dns/server:default setprop options/ip_interfaces = IPv4
# svccfg -s svc:network/dns/server:default setprop options/configuration_file = /etc/named.conf
# svcadm refresh svc:network/dns/server:default
# svcadm enable svc:network/dns/server:default
</pre>
= Set tcp/udp parameter (formerly ndd) =
<syntaxhighlight lang=bash>
# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 1024 -- 1024 1024-65535
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p largest_anon_port=65500 udp
</syntaxhighlight>
= Jumbo Frames =
The MTU of an ipadm interface can never be greater than that of its underlying dladm link.
To change the dladm link's MTU, the ipadm interface has to be disabled first (this causes downtime, so be careful!).
<syntaxhighlight lang=bash>
# ipadm disable-if -t iscsi0
# dladm set-linkprop -p mtu=9000 iscsi0
# ipadm enable-if -t iscsi0
# ipadm set-ifprop -m ipv4 -p mtu=9000 iscsi0
</syntaxhighlight>
= Aggregate for iSCSI =
This is crude, but it worked on our Cisco switches:
<syntaxhighlight lang=bash>
# dladm create-aggr -m trunk -P L4 -L off "-l iscsi"{0..7} iscsi_aggr0 | /bin/sh
# dladm show-aggr -P iscsi_aggr0
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
iscsi_aggr0 trunk L4 auto off short
# dladm show-aggr -L iscsi_aggr0
LINK PORT AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
iscsi_aggr0 iscsi0 no no no no yes no
-- iscsi1 no no no no yes no
-- iscsi2 no no no no yes no
-- iscsi3 no no no no yes no
-- iscsi4 no no no no yes no
-- iscsi5 no no no no yes no
-- iscsi6 no no no no yes no
-- iscsi7 no no no no yes no
</syntaxhighlight>
= Set TCP parameters in immutable zones =
In normal immutable mode, zlogin -U fails to set the property persistently:
<syntaxhighlight lang=bash>
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
ipadm: set-prop: _time_wait_interval: Invalid argument provided
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 -- 60000 1000-600000
</syntaxhighlight>
To set it persistently, you need to boot the zone writable first:
<syntaxhighlight lang=bash>
root@global# zoneadm -z immutable-zone reboot -w
root@global# zlogin -U immutable-zone ipadm set-prop -p _time_wait_interval=30000 tcp
root@global# zlogin immutable-zone ipadm show-prop -p _time_wait_interval tcp
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp _time_wait_interval rw 30000 30000 60000 1000-600000
root@global# zoneadm -z immutable-zone reboot
</syntaxhighlight>
f2113833d85e014ec33e635870ef7dd26d07d9a5
Sarcophyton crassicaule
0
129
2311
348
2021-11-25T15:53:30Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Hufeisen-Lederkoralle
| WissName = Sarcophyton crassicaule
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 23°C - 29°C
}}
fa5305bb2cb2e2015fbb4e980914ce2814671d25
ZFS Recovery
0
30
2313
2202
2021-11-25T15:53:42Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:ZFS|Recovery]]
[[Category:Solaris]]
==Panic at boot time==
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<syntaxhighlight lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</syntaxhighlight>
In /etc/zfs:
<syntaxhighlight lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</syntaxhighlight>
For a zpool in a Solaris Cluster:
<syntaxhighlight lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</syntaxhighlight>
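To pick a target txg programmatically, the same idea can be scripted: keep the newest txg whose timestamp is at or before the point in time you want to roll back to. The sketch below runs on a heredoc sample shaped like the zdb label lines; in practice you would pipe the output of zdb -lu into the awk instead:
<syntaxhighlight lang=bash>
# Select the newest txg at or before a cutoff timestamp (sample data is illustrative)
cutoff=1352184849
best=$(awk -v cutoff="$cutoff" '
    /^txg = /       { txg = $3 }
    /^timestamp = / { if ($3 <= cutoff && txg + 0 > best + 0) best = txg }
    END             { print best }
' <<'EOF'
txg = 40353851
timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg = 40353870
timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
EOF
)
echo "$best"   # 40353851
</syntaxhighlight>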
So, for example, to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853
<syntaxhighlight lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</syntaxhighlight>
==PANIC, NOTICE: spa_import_rootpool: error 19==
The solution is to specify the pool and the device explicitly. So if the following appears during boot:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
Booting into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or you can enter the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
aabaa5b6af899729ebbb018a962a60d883d7db29
NGINX
0
363
2314
2245
2021-11-25T15:53:58Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:NGINX]]
==Add module to nginx on Ubuntu==
For example http-auth-ldap:
<syntaxhighlight lang=bash>
mkdir /opt/src
cd /opt/src
apt source nginx
cd nginx-*
export HTTPS_PROXY=<your proxy server>
git clone https://github.com/kvspb/nginx-auth-ldap.git debian/modules/http-auth-ldap
./configure \
--with-cc-opt="$(dpkg-buildflags --get CFLAGS) -fPIC $(dpkg-buildflags --get CPPFLAGS)" \
--with-ld-opt="$(dpkg-buildflags --get LDFLAGS) -fPIC" \
--prefix=/usr/share/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--with-http_v2_module \
--with-threads \
--without-http_gzip_module \
--add-dynamic-module=debian/modules/http-auth-ldap
make modules
sudo install --mode=0644 --owner=root --group=root objs/ngx_http_auth_ldap_module.so /usr/lib/nginx/modules/
</syntaxhighlight>
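The result is a dynamic module, so nginx will not pick it up until it is loaded explicitly. A minimal sketch of the required nginx.conf line, using the module path from the install step above:
<syntaxhighlight lang=nginx>
# Top of /etc/nginx/nginx.conf, outside any http {} block
load_module /usr/lib/nginx/modules/ngx_http_auth_ldap_module.so;
</syntaxhighlight>
Run nginx -t afterwards to confirm the module loads cleanly before reloading the service.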
92432fdd726f2b143b532117faad98ebba2ddd32
StorageTek SL150
0
190
2315
766
2021-11-25T15:54:09Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Backup]]
=StorageTek SL150 Modular Tapelibrary=
==General Knowledge==
===Default Password===
passw0rd
===Solaris Configuration===
To use the Ultrium-6 tape drives with Solaris, you have to put the following into your st.conf:
<syntaxhighlight lang=bash>
tape-config-list =
"HP Ultrium 6-SCSI ","HP Ultrium 6-SCSI","HP Ultrium 6","HP Ultrium LTO 6","HP_LTO_GEN_6";
HP_LTO_GEN_6 = 2,0x3B,0,0x18659,4,0x00,0x46,0x58,0x5A,3,60,1200,600,1200,600,600,18000
</syntaxhighlight>
The vendor string has to be exactly 8 characters:
HP<6 spaces>Product...
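Since the vendor field of a tape-config-list entry is length-sensitive, a quick shell check of the padding can save a driver-reload cycle. The entry string below is retyped by hand, so treat it as an illustration:
<syntaxhighlight lang=bash>
# The vendor prefix must be exactly 8 characters: "HP" plus 6 spaces
entry='HP      Ultrium 6-SCSI '
vendor=${entry%%Ultrium*}      # strip everything from the product name on
echo "${#vendor}"              # 8
</syntaxhighlight>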
Unload the st driver after changing the st.conf:
<syntaxhighlight lang=bash>
# modunload -i $(modinfo | nawk '$6=="st"{print $1}')
</syntaxhighlight>
Check if the new config settings matched the drive:
<syntaxhighlight lang=bash>
# mt -f /dev/rmt/0cn config
"HP Ultrium 6-SCSI", "HP Ultrium 6-SCSI ", "CFGHPULTRIUM6SCSI";
CFGHPULTRIUM6SCSI = 2,0x3B,0,0x18619,4,0x58,0x58,0x5A,0x5A,3,60,1200,600,1200,600,600,18000;
</syntaxhighlight>
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
4a4a6847c9f43b5311703e4da35c5f6189ba1730
Template:Systematik
10
117
2316
1688
2021-11-25T15:54:16Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Kategorie:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Kategorie:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Kategorie:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Kategorie:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Kategorie:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Kategorie:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Kategorie:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Kategorie:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Kategorie:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Kategorie:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Kategorie:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Kategorie:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Kategorie:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Kategorie:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Kategorie:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Kategorie:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Kategorie:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Kategorie:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Kategorie:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Kategorie:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Kategorie:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Category:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Category: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Category: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Category: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Category: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Category: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Category: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Category: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Category: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Category: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Category: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Category: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Category: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Category: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Category: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Category: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Category: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Category: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Category: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Category: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Category: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Category: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Category: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Category: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Category: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Category: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Category: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Category: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Category: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Category: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Category: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Category: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Category: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Category: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Category: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Category: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Category: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{tribus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Category: {{{subfamilia|}}}{{!}}{{{tribus|}}}]]
| {{#if: {{{familia|}}} | [[Category: {{{familia|}}}{{!}}{{{tribus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{tribus|}}}
| [[Category: {{{tribus|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{subfamilia|}}}
| [[Category: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Category: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Category: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Category: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#if: {{{subgenus|}}}
| [[Category: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Category: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Example invocation:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
dc5649bfb7181035fbd2e8edf31a45244bfda6be
Network troubleshooting
0
284
2317
1300
2021-11-25T15:54:44Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Networking|Troubleshooting]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<syntaxhighlight lang=bash>
# ping -I <your virtual ip> <destination>
</syntaxhighlight>
On Solaris
<syntaxhighlight lang=bash>
# ping -sni <your virtual ip> <destination>
</syntaxhighlight>
=== Traceroute ===
<syntaxhighlight lang=bash>
# traceroute -s <your virtual ip> <destination>
</syntaxhighlight>
=== SSH ===
<syntaxhighlight lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</syntaxhighlight>
=== Telnet ===
<syntaxhighlight lang=bash>
# telnet -b <your virtual ip> <destination>
</syntaxhighlight>
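curl can bind to a source address as well (a hedged addition, not from the original page; whether --interface accepts an IP address can depend on the platform and curl build):
<syntaxhighlight lang=bash>
# curl --interface <your virtual ip> http://<destination>/
</syntaxhighlight>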
== Interface details ==
=== Linux ===
<syntaxhighlight lang=bash>
# ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
</syntaxhighlight>
=== Solaris ===
85bbfb5dcaf078744f151a16c08d10613af09223
Solaris 11 First Steps
0
97
2318
2082
2021-11-25T15:55:09Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris11|First steps]]
= What's new in Solaris 11 =
== Installation ==
=== Automated Installer ===
The Automated Installer (AI for short) is a new way to set up an install server. The configuration is kept in XML files.
For further information, look [http://www.oracle.com/technetwork/articles/servers-storage-admin/best-commands-ai-1667217.html here].
== Package Management ==
No more patching! The new way to update your operating system is pkg. This tool gets new versions of software over the network.
You can add multiple repositories, search repositories for software packages and install them over the network.
[[IPS_cheat_sheet#Examples|Some examples]].
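A few typical pkg calls as a quick sketch (the package name is an example, assuming a configured publisher):
<syntaxhighlight lang=bash>
# pkg publisher                  # list configured repositories
# pkg search -r vim              # search the repositories for a package
# pkg install editor/vim         # install it over the network
# pkg update                     # update all installed packages
</syntaxhighlight>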
===Support repository===
[https://pkg-register.oracle.com/register/certificate Get your client certificates]
[https://pkg-register.oracle.com/register/product_info/1/ Instructions]
== Live upgrade is now Boot environments (beadm) ==
For many years, using live upgrade was somewhat difficult. With ZFS support in live upgrade, updates became easier and consumed less disk space.
Since OpenSolaris (and now in Solaris 11) there is a new way to perform updates.
The new way to handle upgrades and updates is beadm, the boot environment admin tool. You can create a boot environment manually at any time, as known from live upgrade.
What is new is that software updates from pkg create boot environments automatically if needed (or if pkg is used with --require-new-be or --require-backup-be).
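A short beadm session could look like this (the boot environment name is an example):
<syntaxhighlight lang=bash>
# beadm list                     # show existing boot environments
# beadm create solaris-backup    # create a new BE from the current one
# beadm activate solaris-backup  # boot into it on the next reboot
</syntaxhighlight>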
== Distro Constructor ==
You can compile your own Solaris 11 distribution ISO image by using the Distribution Constructor. This will make customized installations much faster.
There is a good article at Oracle called [http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-087-sol11-dist-const-496819.html How to Create a Customized Oracle Solaris 11 Image Using the Distribution Constructor].
== Networking (Crossbow) ==
An enhanced version of the new network stack from the project Crossbow (known from OpenSolaris) is implemented.
The new stack virtualizes the networking of your Solaris system, which enables many new features such as virtual switches, virtual NICs, and so on.
You can build even complex virtualized networks inside your Solaris instance.
=== Interface Names ===
The new network virtualization also covers interface names. Interfaces are now named net0, net1, ... and no longer after their drivers. So you can simply say net0 carries frontend traffic and net1 carries backend traffic, independent of the hardware the server is built from.
You can even name interfaces after their usage, like frontend0 and backend0, so you always know what kind of traffic runs over each interface.
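Renaming a datalink after its usage could look like this (link names are examples; the link must not be in use while renaming):
<syntaxhighlight lang=bash>
# dladm show-phys                # physical links with their net* names
# dladm rename-link net1 backend0
</syntaxhighlight>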
=== Etherstubs and VNICs ===
Etherstubs are virtual switches inside your OS which can be connected to VNICs and physical interfaces.
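A minimal sketch of an etherstub with two VNICs attached (names are examples):
<syntaxhighlight lang=bash>
# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0
# dladm create-vnic -l stub0 vnic1
# dladm show-vnic
</syntaxhighlight>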
=== ipadm ===
The tool ipadm is, together with dladm, a powerful tool to manage your network stack.
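For example, configuring a static address (interface name and address are examples):
<syntaxhighlight lang=bash>
# ipadm create-ip net0
# ipadm create-addr -T static -a 192.168.1.10/24 net0/v4
# ipadm show-addr
</syntaxhighlight>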
== Storage Engine (COMSTAR) ==
== ZFS deduplication and encryption ==
=== ZFS deduplication ===
=== ZFS encryption ===
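A hedged sketch of both features with Solaris 11 syntax (dataset names are examples; the encrypted dataset prompts for a passphrase by default):
<syntaxhighlight lang=bash>
# zfs create -o dedup=on rpool/export/dedup
# zfs create -o encryption=on rpool/export/secret
</syntaxhighlight>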
== Zones ==
=== Immutable Zones ===
=== zonestat ===
== Kernel based CIFS ==
5dd7fc4d5a4f0595d0746dd67febb44d93b2322d
Solaris 11 hwmgmt
0
352
2319
2081
2021-11-25T15:58:37Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|hwmgmt]]
=Commands=
==hwmgmtcli==
==ilomconfig==
<syntaxhighlight lang=bash>
# ilomconfig list network
</syntaxhighlight>
==raidconfig==
<syntaxhighlight lang=bash>
raidconfig list all
</syntaxhighlight>
==fwupdate==
<syntaxhighlight lang=bash>
fwupdate list all
</syntaxhighlight>
==itpconfig==
<syntaxhighlight lang=bash>
# itpconfig list interconnect
Interconnect
============
State: enabled
Type: USB Ethernet
SP Interconnect IP Address: 169.254.182.76
Host Interconnect IP Address: 169.254.182.77
Interconnect Netmask: 255.255.255.0
SP Interconnect MAC Address: 02:21:28:57:47:16
Host Interconnect MAC Address: 02:21:28:57:47:17
</syntaxhighlight>
1d0a8133227c874a7abd6368b4b6c18ee16d0bb5
Oracle Clients
0
342
2320
1743
2021-11-25T16:00:33Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<syntaxhighlight lang=bash>
$ sudo apt install alien libaio1
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep .so )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
$ dpkg -L $(dpkg -l oracle-instantclient*-basiclite | awk '$1=="ii"{print $2;}') | \
awk '
/client64$/{
oracle_home=$1;
printf "ORACLE_HOME=%s\nPATH=${PATH}:${ORACLE_HOME}/bin\nexport ORACLE_HOME PATH\n",oracle_home;
}' | \
sudo tee /etc/profile.d/oracle.sh
</syntaxhighlight>
9cb31deee7acc75aea3ce562c44591890832b15b
Brocade
0
107
2321
2057
2021-11-25T16:00:39Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:FC]]
[[Category:Brocade]]
=A few commands with brief explanations=
==Firmware==
<syntaxhighlight lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</syntaxhighlight>
== General Switch Information ==
<syntaxhighlight lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</syntaxhighlight>
Important lines:
===switchshow:switchType===
<syntaxhighlight lang=bash>
switchType: 71.2
</syntaxhighlight>
switchType tells us which switch model we are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<syntaxhighlight lang=bash>
zoning: ON (Fabric1)
</syntaxhighlight>
Shows whether [[#Zoning|Zoning]] is active and which configuration is in effect (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles
* Principal (the boss)
and
* Subordinate (the underling)
e.g.:
<syntaxhighlight lang=bash>
switchRole: Principal
</syntaxhighlight>
The role can be changed.
'''WARNING: DISRUPTIVE ACTION!'''
<syntaxhighlight lang=bash>
brocade1:admin> fabricprincipal -f 1
</syntaxhighlight>
==Fabric==
A fabric consists of one or more Fibre Channel switches that are connected to each other. Components such as hosts, storage, and tapes are attached to the fabric via the Fibre Channel switches.
<syntaxhighlight lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</syntaxhighlight>
==InterSwitchLinks (ISL)==
With islshow you can find out which other switches are attached and through which ports they are connected to the current one.
<syntaxhighlight lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</syntaxhighlight>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Nowadays WWN zoning is practically the only kind still used, because it is the most flexible and safest: you can freely re-plug cables within the [[#Fabric|Fabric]] without a device suddenly seeing a different device than before.
With port zoning there is the risk of plugging a cable into the wrong port.
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|-
| 124 || Brocade 5430 8 Gb 16-port Blade Server SAN I/O Module
|-
| 125 || Brocade 5431 8 Gbit 16-port stackable switch module
|-
| 129 || Brocade 6548 16 Gb 28-port Blade Server SAN I/O Module
|-
| 130 || Brocade M6505 16 Gbit 24-port Blade Server SAN I/O Module
|-
| 133 || Brocade 6520 16 Gb 96-port switch
|-
| 134 || Brocade 5432 8 Gb 24-port Blade Server SAN I/O Module
|-
| 148 || Brocade 7840 16 Gb 24-FC ports, 16 10GbE ports, 2 40GbE ports extension switch
|-
| 170 || Brocade G610
|}
=Enable root account for ssh=
==Enable root for ssh==
<syntaxhighlight lang=bash>
sw-fc02fab-b:admin> rootaccess --show
RootAccess: consoleonly
sw-fc02fab-b:admin> rootaccess --set all
sw-fc02fab-b:admin> rootaccess --show
RootAccess: all
sw-fc02fab-b:admin> userconfig --change root -e yes
</syntaxhighlight>
==Enable root account==
<syntaxhighlight lang=bash>
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: No
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
sw-fc02fab-b:admin> userconfig --change root -e yes
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: Yes
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
</syntaxhighlight>
==Set root password directly after enabling the account==
<syntaxhighlight lang=bash>
$ ssh root@192.168.1.1
root@192.168.1.1's password:
============================================================================================
ATTENTION:
It is recommended that you change the default passwords for all the switch accounts.
Refer to the product release notes and administrators guide if you need further information.
============================================================================================
...
</syntaxhighlight>
=SSH with public key=
==Host -> Brocade==
<syntaxhighlight lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</syntaxhighlight>
==Brocade -> Host==
===Generate key on the switch===
As '''admin'''!
<syntaxhighlight lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</syntaxhighlight>
===Key from switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<syntaxhighlight lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</syntaxhighlight>
=Backing up the config=
Important: exchange the keys beforehand!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<syntaxhighlight lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered keep new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</syntaxhighlight>
=Firmware update=
==Record the running firmware==
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] for setting up a chroot sftp environment.
Then create the home on the sftp-server:
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</syntaxhighlight>
If there already is a brocade user with an authorized_keys file, do:
<syntaxhighlight lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</syntaxhighlight>
otherwise put the keys into /home/sftp/.authorized_keys/brocade if you want.
Untar your firmware as user brocade in /home/sftp/brocade/fw.
Log in to the switch as admin and run, for example:
<syntaxhighlight lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</syntaxhighlight>
26b18837504da70ebfa143713e3fdbcd2f623d5a
2329
2321
2021-11-25T16:02:16Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:FC]]
[[Category:Brocade]]
=A few commands with brief explanations=
==Firmware==
<syntaxhighlight lang=bash>
brocade:admin> firmwareshow
Appl Primary/Secondary Versions
------------------------------------------
FOS v6.4.2a
v6.4.2a
</syntaxhighlight>
== General Switch Information ==
<syntaxhighlight lang=bash>
brocade:admin> switchshow
switchName: brocade
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:05:34:be:f3:f0
zoning: ON (Fabric1)
switchBeacon: OFF
Index Port Address Media Speed State Proto
==============================================
0 0 010000 id N4 Online FC F-Port 50:0a:09:81:96:c8:3e:f8
1 1 010100 id N4 Online FC F-Port 50:0a:09:81:86:c8:3e:f8
2 2 010200 id N8 Online FC F-Port 21:00:00:24:ff:36:45:02
3 3 010300 id N8 Online FC F-Port 21:00:00:24:ff:36:45:21
4 4 010400 id N8 Online FC F-Port 21:00:00:24:ff:36:44:90
5 5 010500 id N8 Online FC F-Port 21:00:00:24:ff:36:45:f6
6 6 010600 id N8 No_Light FC
...
</syntaxhighlight>
Important lines:
===switchshow:switchType===
<syntaxhighlight lang=bash>
switchType: 71.2
</syntaxhighlight>
switchType tells us which switch model we are looking at; here, a Brocade 300.
* [https://www.ibm.com/developerworks/community/blogs/anthonyv/entry/brocade_san_switch_models1?lang=en Table from IBM]
* PDF from Brocade: [[Media:Switch-types-blads-ids-product-names.pdf|Switch Types, Blade IDs, and Product Names]]
===switchshow:zoning===
<syntaxhighlight lang=bash>
zoning: ON (Fabric1)
</syntaxhighlight>
Shows whether [[#Zoning|Zoning]] is active and which configuration is in effect (here Fabric1); see also [[#Fabric|Fabric]].
===switchshow:switchRole===
There are two roles
* Principal (the boss)
and
* Subordinate (the underling)
e.g.:
<syntaxhighlight lang=bash>
switchRole: Principal
</syntaxhighlight>
The role can be changed.
'''WARNING: DISRUPTIVE ACTION!'''
<syntaxhighlight lang=bash>
brocade1:admin> fabricprincipal -f 1
</syntaxhighlight>
==Fabric==
A fabric consists of one or more Fibre Channel switches that are connected to each other. Components such as hosts, storage, and tapes are attached to the fabric via the Fibre Channel switches.
<syntaxhighlight lang=bash>
brocade:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:34:be:f3:f0 10.60.1.110 0.0.0.0 >"brocade"
2: fffc02 10:00:00:05:1e:0d:da:27 10.60.1.111 0.0.0.0 "brocade1"
4: fffc04 10:00:00:05:1e:b3:61:7d 10.60.1.113 0.0.0.0 "brocade3"
42: fffc2a 10:00:00:05:1e:0c:f3:98 10.60.1.112 0.0.0.0 "brocade2"
The Fabric has 4 switches
</syntaxhighlight>
==InterSwitchLinks (ISL)==
With islshow you can find out which other switches are attached and through which ports they are connected to the current one.
<syntaxhighlight lang=bash>
brocade:admin> islshow
1: 0-> 0 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
2: 4-> 0 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
3: 8-> 17 10:00:00:05:1e:0d:ca:27 2 brocade1 sp: 4.000G bw: 4.000G
4: 9-> 0 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
5: 12-> 17 10:00:00:05:1e:0c:e3:98 42 brocade2 sp: 4.000G bw: 4.000G
6: 13-> 17 10:00:00:05:1e:b3:51:7d 4 brocade3 sp: 4.000G bw: 4.000G
</syntaxhighlight>
==Zoning==
A zone defines which ports or WWNs are allowed to see each other.
Nowadays WWN zoning is practically the only kind still used, because it is the most flexible and safest: you can freely re-plug cables within the [[#Fabric|Fabric]] without a device suddenly seeing a different device than before.
With port zoning there is the risk of plugging a cable into the wrong port.
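A minimal WWN-zoning sketch (zone and configuration names as well as the WWNs are placeholder examples, not taken from this fabric):
<syntaxhighlight lang=bash>
brocade:admin> zonecreate "host1_storage1", "10:00:00:00:c9:12:34:56; 50:0a:09:81:96:c8:3e:f8"
brocade:admin> cfgadd "Fabric1", "host1_storage1"
brocade:admin> cfgsave
brocade:admin> cfgenable "Fabric1"
</syntaxhighlight>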
=Switch Types and Product Names=
{| class="wikitable sortable" style="text-align: center; width: 85%"
! Switch Type
! Switch Name
|-
| 1 || Brocade 1000 Switches
|-
| 2, 6 || Brocade 2800 Switch
|-
| 3 || Brocade 2100, 2400 Switches
|-
| 4 || Brocade 20x0, 2010, 2040, 2050 Switches
|-
| 5 || Brocade 22x0, 2210, 2240, 2250 Switches
|-
| 7 || Brocade 2000 Switch
|-
| 9 || Brocade 3800 Switch
|-
| 10 || Brocade 12000 Director
|-
| 12 || Brocade 3900 Switch
|-
| 16 || Brocade 3200 Switch
|-
| 17 || Brocade 3800VL
|-
| 18 || Brocade 3000 Switch
|-
| 21 || Brocade 24000 Director
|-
| 22 || Brocade 3016 Switch
|-
| 26 || Brocade 3850 Switch
|-
| 27 || Brocade 3250 Switch
|-
| 29 || Brocade 4012 Embedded Switch
|-
| 32 || Brocade 4100 Switch
|-
| 33 || Brocade 3014 Switch
|-
| 34 || Brocade 200E Switch
|-
| 37 || Brocade 4020 Embedded Switch
|-
| 38 || Brocade 7420 SAN Router
|-
| 40 || Fibre Channel Routing (FCR) Front Domain
|-
| 41 || Fibre Channel Routing, (FCR) Xlate Domain
|-
| 42 || Brocade 48000 Director
|-
| 43 || Brocade 4024 Embedded Switch
|-
| 44 || Brocade 4900 Switch
|-
| 45 || Brocade 4016 Embedded Switch
|-
| 46 || Brocade 7500 Switch
|-
| 51 || Brocade 4018 Embedded Switch
|-
| 55.2 || Brocade 7600 Switch
|-
| 58 || Brocade 5000 Switch
|-
| 61 || Brocade 4424 Embedded Switch
|-
| 62 || Brocade DCX Backbone
|-
| 64 || Brocade 5300 Switch
|-
| 66 || Brocade 5100 Switch
|-
| 67 || Brocade Encryption Switch
|-
| 69 || Brocade 5410 Blade
|-
| 70 || Brocade 5410 Embedded Switch
|-
| 71 || Brocade 300 Switch
|-
| 72 || Brocade 5480 Embedded Switch
|-
| 73 || Brocade 5470 Embedded Switch
|-
| 75 || Brocade M5424 Embedded Switch
|-
| 76 || Brocade 8000 Switch
|-
| 77 || Brocade DCX-4S Backbone
|-
| 83 || Brocade 7800 Extension Switch
|-
| 86 || Brocade 5450 Embedded Switch
|-
| 87 || Brocade 5460 Embedded Switch
|-
| 90 || Brocade 8470 Embedded Switch
|-
| 92 || Brocade VA-40FC Switch
|-
| 95 || Brocade VDX 6720-24 Data Center Switch
|-
| 96 || Brocade VDX 6730-32 Data Center Switch
|-
| 97 || Brocade VDX 6720-60 Data Center Switch
|-
| 98 || Brocade VDX 6730-76 Data Center Switch
|-
| 108 || Dell M8428-k FCoE Embedded Switch
|-
| 109 || Brocade 6510 Switch
|-
| 116 || Brocade VDX 6710 Data Center Switch
|-
| 117 || Brocade 6547 Embedded Switch
|-
| 118 || Brocade 6505 Switch
|-
| 120 || Brocade DCX 8510-8 Backbone
|-
| 121 || Brocade DCX 8510-4 Backbone
|-
| 124 || Brocade 5430 8 Gb 16-port Blade Server SAN I/O Module
|-
| 125 || Brocade 5431 8 Gbit 16-port stackable switch module
|-
| 129 || Brocade 6548 16 Gb 28-port Blade Server SAN I/O Module
|-
| 130 || Brocade M6505 16 Gbit 24-port Blade Server SAN I/O Module
|-
| 133 || Brocade 6520 16 Gb 96-port switch
|-
| 134 || Brocade 5432 8 Gb 24-port Blade Server SAN I/O Module
|-
| 148 || Brocade 7840 16 Gb 24-FC ports, 16 10GbE ports, 2 40GbE ports extension switch
|-
| 170 || Brocade G610
|}
=Enable root account for ssh=
==Enable root for ssh==
<syntaxhighlight lang=bash>
sw-fc02fab-b:admin> rootaccess --show
RootAccess: consoleonly
sw-fc02fab-b:admin> rootaccess --set all
sw-fc02fab-b:admin> rootaccess --show
RootAccess: all
sw-fc02fab-b:admin> userconfig --change root -e yes
</syntaxhighlight>
==Enable root account==
<syntaxhighlight lang=bash>
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: No
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
sw-fc02fab-b:admin> userconfig --change root -e yes
sw-fc02fab-b:admin> userconfig --show root
Account name: root
Description: root
Enabled: Yes
Password Last Change Date: Fri Aug 21 2020 (UTC)
Password Expiration Date: Not Applicable (UTC)
Locked: No
Role: root
AD membership: 0-255
Home AD: 0
Day Time Access: N/A
</syntaxhighlight>
==Set root password directly after enabling the account==
<syntaxhighlight lang=bash>
$ ssh root@192.168.1.1
root@192.168.1.1's password:
============================================================================================
ATTENTION:
It is recommended that you change the default passwords for all the switch accounts.
Refer to the product release notes and administrators guide if you need further information.
============================================================================================
...
</syntaxhighlight>
=SSH with public key=
==Host -> Brocade==
<syntaxhighlight lang=bash>
BSAN01:root> cd ~/.ssh
BSAN01:root> ls -al
total 8
drwxr-xr-x 2 root sys 4096 Jul 18 2011 ./
drwxr-x--- 4 root sys 4096 Jun 19 2013 ../
BSAN01:root> echo "ssh-dss AAAA...TD8cc= root@sun" >> authorized_keys
</syntaxhighlight>
==Brocade -> Host==
===Generate key on the switch===
As '''admin'''!
<syntaxhighlight lang=bash>
Host# ssh admin@bsan01
BSAN01:admin> sshutil genkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Key pair generated successfully.
BSAN01:admin> exit
</syntaxhighlight>
===Key from switch -> host ~/.ssh/authorized_keys===
As '''root'''!
<syntaxhighlight lang=bash>
Host# ssh root@bsan01 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</syntaxhighlight>
=Backing up the config=
Important: exchange the keys beforehand!
# The Brocade public key must go into ~bckpuser/.ssh/authorized_keys
# The public key of the calling user must go into ~root/.ssh/authorized_keys on the Brocade
A possible script could look like this:
<syntaxhighlight lang=bash>
#!/bin/bash
SWITCHES="
bsan01
bsan02
"
BACKUP_HOST="10.0.0.42"
LOCALUSER="bckpuser"
BACKUPDIR="brocade_backup"
[ ! -d ~/brocade_backup ] && mkdir -p ~/brocade_backup
date="$(date '+%Y%m%d-%H%M%S')"
for switch in ${SWITCHES} ; do
printf "Backing up ${switch} to ~${LOCALUSER}/${BACKUPDIR}/${switch}_config_${date}.txt... "
ssh -i ~/.ssh/id_rsa_nopw root@${switch} /fabos/link_sbin/configupload -all -p scp ${BACKUP_HOST},${LOCALUSER},${BACKUPDIR}/${switch}_config_${date}.txt
tmp_file=/tmp/.$$_${switch}.txt
bakup_file=~/${BACKUPDIR}/${switch}_config_${date}.txt
last_backup_file="$(ls -1rt ~/${BACKUPDIR}/${switch}_config_*.txt.gz | tail -1)"
gzip -cd ${last_backup_file} | grep -v "date =" > ${tmp_file}
if ( grep -v "date =" ${bakup_file} | diff -ub - ${tmp_file} )
then
# The last backup is identical
rm -f ${bakup_file}
else
# Differences encountered keep new backup
gzip -9 ${bakup_file}
fi
[ -f "${tmp_file}" ] && rm -f ${tmp_file}
done
</syntaxhighlight>
=Firmware update=
==Record the running firmware==
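One way to record the running firmware before the update (a hedged sketch; the switch hostname is an example):
<syntaxhighlight lang=bash>
$ ssh admin@san-sw firmwareshow | tee san-sw_firmware_$(date '+%Y%m%d').txt
</syntaxhighlight>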
==Example for a brocade sftp firmware download directory==
First take a look [[SSH_Tipps_und_Tricks#SFTP_chroot|here]] for setting up a chroot sftp environment.
Then create the home on the sftp-server:
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /home/sftp/brocade
# useradd --create-home --home-dir /home/sftp/brocade/fw brocade
</syntaxhighlight>
If there already is a brocade user with an authorized_keys file, do:
<syntaxhighlight lang=bash>
# cp --preserve=mode ~brocade/.ssh/authorized_keys /home/sftp/.authorized_keys/brocade
</syntaxhighlight>
otherwise put the keys into /home/sftp/.authorized_keys/brocade if you want.
Untar your firmware as user brocade in /home/sftp/brocade/fw.
Log in to the switch as admin and run, for example:
<syntaxhighlight lang=bash>
san-sw:admin> firmwaredownload -s -b -p sftp <ip of the sftp-server>,brocade,fw/v7.2.1f
</syntaxhighlight>
e8ec31d1ec826e2437f2cb98cc10474459313112
SSH FingerprintLogging
0
358
2322
2257
2021-11-25T16:01:00Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:SSH|Fingerprint]]
[[Kategorie:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why logging fingerprints?==
It simply makes it possible to set the [[Bash]] HISTFILE per logged-in user.
==The AuthorizedKeysCommand==
* /opt/sbin/fingerprintlog:
<syntaxhighlight lang=bash>
#!/bin/bash
# /opt/sbin/fingerprintlog <logfile> %u %k %t %f
# Arguments to AuthorizedKeysCommand may be provided using the following tokens, which will be expanded at runtime:
# %% is replaced by a literal '%',
# %u is replaced by the username being authenticated,
# %h is replaced by the home directory of the user being authenticated,
# %t is replaced with the key type offered for authentication,
# %f is replaced with the fingerprint of the key, and
# %k is replaced with the key being offered for authentication.
# If no arguments are specified then the username of the target user will be supplied.
[ "_${LOGNAME}_" != "_daemon_" ] && exit 1
LOGFILE=$1
USER=$2
KEY=$3
KEYTYPE=$4
FINGERPRINT=$5
printf "%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n" "$(/bin/date -Iseconds)" "${KEYTYPE}" "${USER}" "${PPID}" "${FINGERPRINT}" "${KEY}" >> ${LOGFILE}
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chmod 0750 /opt/sbin/fingerprintlog
# chown root:daemon /opt/sbin/fingerprintlog
</syntaxhighlight>
==Create the logfile==
* /var/log/fingerprint.log
<syntaxhighlight lang=bash>
# touch /var/log/fingerprint.log
# chown daemon:ssh-user /var/log/fingerprint.log
# chmod 0640 /var/log/fingerprint.log
</syntaxhighlight>
==Setup logrotation==
* /etc/logrotate.d/fingerprintlog
<syntaxhighlight lang=bash>
/var/log/fingerprint.log
{
su daemon syslog
create 0640 daemon ssh-user
rotate 8
weekly
missingok
notifempty
}
</syntaxhighlight>
==Add fingerprint logging to sshd==
* /etc/ssh/sshd_config
<syntaxhighlight lang=bash>
...
DenyUsers daemon
AuthorizedKeysCommand /opt/sbin/fingerprintlog /var/log/fingerprint.log %u %k %t %f
AuthorizedKeysCommandUser daemon
...
</syntaxhighlight>
Restart sshd
<syntaxhighlight lang=bash>
# systemctl restart ssh.service
</syntaxhighlight>
==Add magic to your .bashrc==
<syntaxhighlight lang=bash>
# apt install gawk
</syntaxhighlight>
* ~/.bashrc
<syntaxhighlight lang=bash>
...
# Match parent PID or grand parent PID against fingerprint.log
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(/usr/bin/gawk -v ppid="(${PPID}|$(awk '{print $4;}' /proc/${PPID}/stat))" -v user=${LOGNAME} '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6;exit;}' /var/log/fingerprint.log)
# Set the history file
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
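The gawk match above can be checked against a hand-crafted log line; this sketch uses plain awk, and the PPID 4242 and the fingerprint are made up:
<syntaxhighlight lang=bash>
# A sample line in the format fingerprintlog writes (all values hypothetical)
line='2021-11-25T16:01:00+00:00 ssh-login T=ssh-rsa U=lolly PPID=4242 FP=SHA256:abc/def K=AAAAB3...'
# Same match as in ~/.bashrc: pick the FP field for PPID 4242 and map '/' to '_'
fp=$(printf '%s\n' "$line" | awk -v ppid='(4242|1)' '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6; exit;}')
echo "$fp"
</syntaxhighlight>
This should print SHA256:abc_def, i.e. the fingerprint with slashes replaced so it is safe to use in a file name.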
a038d213417e09a227d777c67e80c09cd61d5ffa
Sendmail
0
384
2323
2206
2021-11-25T16:01:10Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
=Compile sendmail=
==Solaris 10==
Untar source, then go into the source directory.
===devtools/Site/site.config.m4===
<syntaxhighlight lang=m4>
dnl #####################################################################
dnl ### Changes to disable the default NIS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF', `-UNIS')
dnl #####################################################################
dnl ### Changes for PH_MAP support. ###
dnl #####################################################################
APPENDDEF(`confMAPDEF',`-DPH_MAP')
APPENDDEF(`confLIBS', `-lphclient')
APPENDDEF(`confINCDIRS', `-I/opt/nph/include')
APPENDDEF(`confLIBDIRS', `-L/opt/nph/lib')
dnl #####################################################################
dnl ### Changes for STARTTLS support ###
dnl #####################################################################
APPENDDEF(`confENVDEF',`-DSTARTTLS')
APPENDDEF(`confLIBS', `-lssl -lcrypto')
APPENDDEF(`confLIBDIRS', `-L/opt/openssl/lib -R/opt/openssl/lib')
APPENDDEF(`confINCDIRS', `-I/opt/openssl/include')
dnl #####################################################################
dnl ### GCC settings ###
dnl #####################################################################
define(`confCC', `gcc')
define(`confOPTIMIZE', `-O3')
define(`confCCOPTS', `-m64 -B/usr/ccs/bin/amd64')
define(`confLDOPTS', `-m64 -static-libgcc -lgcc_s_amd64')
APPENDDEF(`confENVDEF', `-DSM_CONF_STDBOOL_H=0')
APPENDDEF(`confLIBDIRS', `-L/lib/64 -R/lib/64 -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64')
dnl #####################################################################
dnl ### Use the more modern shell ###
dnl #####################################################################
define(`confSHELL', `/usr/bin/bash')
dnl #####################################################################
dnl ### Installdirs ###
dnl #####################################################################
define(`confMANROOT', `/opt/sendmail-8.16.1/share/man/cat')
define(`confMANROOTMAN', `/opt/sendmail-8.16.1/share/man/man')
define(`confMBINDIR', `/opt/sendmail-8.16.1/sbin')
define(`confUBINDIR', `/opt/sendmail-8.16.1/bin')
</syntaxhighlight>
<syntaxhighlight lang=bash>
# sh ./Build -c
# cd cf/cf
# cp generic-solaris.mc sendmail.mc
# sh ./Build sendmail.cf
# sh ./Build install-cf
# mkdir -p /opt/sendmail-8.16.1/{bin,share/man/cat{1,5,8}} ; ./Build install ;
</syntaxhighlight>
== Using the original Solaris 10 svc to start your own sendmail ==
If you have set config/local_only=true in the parameters of svc:/network/smtp:sendmail, the service will fail with:
Invalid operation mode l
This is because the start script ends up calling sendmail with the option "-bl" when config/local_only=true is set.
So put this in your sendmail.mc instead:
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
and set config/local_only=false:
<syntaxhighlight lang=bash>
# svccfg -s svc:/network/smtp:sendmail setprop config/local_only=false
# svcadm refresh svc:/network/smtp:sendmail
</syntaxhighlight>
After that sendmail might come up :-).
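The service can then be restarted and checked with the standard SMF tools (the FMRI is the one used above):
<syntaxhighlight lang=bash>
# svcadm restart svc:/network/smtp:sendmail
# svcs -x svc:/network/smtp:sendmail
</syntaxhighlight>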
f4cc676ad8c9fd6feb99d2f3b0799f2eb7a0e4fb
Solaris OracleClusterware
0
274
2324
2270
2021-11-25T16:01:13Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Solaris11|Clusterware]]
[[Category:Oracle|Clusterware]]
==Get Solaris release information==
<syntaxhighlight lang=bash>
# pkg info kernel | \
nawk -F '.' '
/Build Release:/{
solaris=$NF;
}
/Branch:/{
subrel=$3;
update=$4;
}
END{
printf "Solaris %d.%d Update %d\n",solaris,subrel,update;
}'
</syntaxhighlight>
=Needed Solaris packages=
==Install pkg dependencies==
<syntaxhighlight lang=bash>
# pkg install developer/assembler
# pkg install developer/build/make
# pkg install x11/diagnostic/x11-info-clients
</syntaxhighlight>
==Check pkg dependencies==
<syntaxhighlight lang=bash>
# pkg list \
developer/assembler \
developer/build/make \
x11/diagnostic/x11-info-clients
</syntaxhighlight>
=User / group settings=
==Groups==
<syntaxhighlight lang=bash>
# groupadd -g 186 oinstall
# groupadd -g 187 asmadmin
# groupadd -g 188 asmdba
# groupadd -g 200 dba
</syntaxhighlight>
==User==
<syntaxhighlight lang=bash>
# useradd \
-u 102 \
-g oinstall \
-G asmdba,dba \
-c "Oracle DB" \
-m -d /export/home/oracle \
oracle
# useradd \
-u 406 \
-g oinstall \
-G asmdba,asmadmin,dba \
-c "Oracle Grid" \
-m -d /export/home/grid \
grid
</syntaxhighlight>
===Generate ssh public keys===
<syntaxhighlight lang=bash>
# su - grid
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/grid/.ssh/id_rsa): <Enter>
Created directory '/export/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /export/home/grid/.ssh/id_rsa.
Your public key has been saved in /export/home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
..:..:.. grid@grid01
$ cat .ssh/id_rsa.pub > .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ vi .ssh/authorized_keys
</syntaxhighlight>
Add the public key of other nodes.
After that do this on all other nodes added as grid:
<syntaxhighlight lang=bash>
$ scp grid01:.ssh/authorized_keys .ssh/authorized_keys
</syntaxhighlight>
Now do a cross login from every node to every other node (even to itself) to add all hosts to the known_hosts files. The installer needs this.
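The cross logins can be scripted; a sketch assuming the hypothetical node names grid01 and grid02 (run as grid on every node):
<syntaxhighlight lang=bash>
# Log in once to every node (including this one) so each host key
# ends up in ~/.ssh/known_hosts; confirm the key fingerprints when asked.
NODES="grid01 grid02"
for node in ${NODES} ; do
ssh ${node} hostname
done
</syntaxhighlight>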
==Projects==
<syntaxhighlight lang=bash>
# projadd -p 186 -G oinstall \
-K process.max-file-descriptor="(basic,1024,deny)" \
-K process.max-file-descriptor="(privileged,65536,deny)" \
-K process.max-sem-nsems="(privileged,2048,deny)" \
-K project.max-sem-ids="(privileged,2048,deny)" \
-K project.max-shm-ids="(privileged,200,deny)" \
-K project.max-shm-memory="(privileged,274877906944,deny)" \
group.oinstall
</syntaxhighlight>
===Check project settings===
<syntaxhighlight lang=bash>
# su - oracle
$ for name in process.{max-file-descriptor,max-sem-nsems} ; do prctl -t privileged -i process -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
privileged 65.5K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 2.05K - deny -
$ for name in project.{max-sem-ids,max-shm-ids,max-shm-memory} ; do prctl -t privileged -n ${name} $$ ; done
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 2.05K - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 200 - deny -
process: 14822: -bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
usage 0B
privileged 256GB - deny -
</syntaxhighlight>
=Directories=
<syntaxhighlight lang=bash>
# zfs create -o mountpoint=none rpool/grid
# zfs create -o mountpoint=/opt/gridhome rpool/grid/gridhome
# zfs create -o mountpoint=/opt/gridbase rpool/grid/gridbase
# zfs create -o mountpoint=/opt/oraInventory rpool/grid/oraInventory
# chown -R grid:oinstall /opt/{grid{home,base},oraInventory}
</syntaxhighlight>
=Storage tasks=
==Discover LUNs==
<syntaxhighlight lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
/Vendor:/{
vendor=$NF;
}
/Serial Num:/{
serial=$NF;
}
/Unformatted capacity:/{
capacity=$(NF-1)""$NF;
}
disk != "" && /^$/{
printf "%s vendor=%s serial=%s capacity=%s\n",disk,vendor,serial,capacity;
}' | \
sort -u
</syntaxhighlight>
==Label Disks==
===Single Disk===
<syntaxhighlight lang=bash>
# printf 'type 0 no no\nlabel 1 yes\npartition\n0 usr wm 8192 $\nlabel 1 yes\nquit\nquit\n' | \
format -e /dev/rdsk/<disk>
</syntaxhighlight>
===All FC disks===
For x86 you have to call format -> fdisk -> y for all disks first :-\
'''DON'T DO THE NEXT STEP IF YOU DON'T KNOW WHAT YOU ARE DOING!'''
format_command_file.txt:
<syntaxhighlight lang=bash>
type 0 no no
label 1 yes
partition
0 usr wm 8192 $
label 1 yes
quit
quit
</syntaxhighlight>
<syntaxhighlight lang=bash>
# luxadm -e port | \
nawk '{print $1}' | \
xargs -n 1 luxadm -e dump_map | \
nawk '/Disk device/{print $5}' | \
sort -u | \
xargs luxadm display | \
nawk '
/DEVICE PROPERTIES for disk:/{
disk=$NF;
}
/DEVICE PROPERTIES for:/{
disk="";
}
disk && /^$/{
printf "%s\n",disk;
}' | \
sort -u | \
xargs -n 1 format -e -f ~/format_command_file.txt
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chown -RL grid:asmadmin /dev/rdsk/c0t6000*
# chmod 660 /dev/rdsk/c0t6000*
</syntaxhighlight>
==Set swap to physical RAM==
<syntaxhighlight lang=bash>
# export RAM=256G
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create \
-V ${RAM} \
-b 8k \
-o primarycache=metadata \
-o checksum=on \
-o dedup=off \
-o encryption=off \
-o compression=off \
rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
</syntaxhighlight>
=Network=
==Check port ranges==
<syntaxhighlight lang=bash>
# for protocol in tcp udp ; do ipadm show-prop ${protocol} -p smallest_anon_port,largest_anon_port ; done
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp smallest_anon_port rw 9000 9000 32768 1024-65500
tcp largest_anon_port rw 65500 65500 65535 9000-65535
PROTO PROPERTY PERM CURRENT PERSISTENT DEFAULT POSSIBLE
udp smallest_anon_port rw 9000 9000 32768 1024-65500
udp largest_anon_port rw 65500 65500 65535 9000-65535
</syntaxhighlight>
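If the ranges are not set yet, they can be changed persistently with ipadm; a sketch using the values shown above:
<syntaxhighlight lang=bash>
# Restrict the anonymous (ephemeral) port range for TCP and UDP
for protocol in tcp udp ; do
ipadm set-prop -p smallest_anon_port=9000 ${protocol}
ipadm set-prop -p largest_anon_port=65500 ${protocol}
done
</syntaxhighlight>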
==Setup private cluster interconnects==
Example with a small net with six usable IPs (eight including network and broadcast address). This obviously limits the maximum number of nodes to six.
First node:
<syntaxhighlight lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.1/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.9/29 net5/ci2
</syntaxhighlight>
Second node:
<syntaxhighlight lang=bash>
# ipadm create-ip net1
# ipadm create-addr -T static -a 10.65.0.2/29 net1/ci1
# ipadm create-ip net5
# ipadm create-addr -T static -a 10.65.0.10/29 net5/ci2
</syntaxhighlight>
==Set slew always for ntp==
After configuring ntp, set slew_always to avoid time warps!
<syntaxhighlight lang=bash>
# svccfg -s svc:/network/ntp:default setprop config/slew_always = true
# svcadm refresh svc:/network/ntp:default
# svccfg -s svc:/network/ntp:default listprop config/slew_always
config/slew_always boolean true
</syntaxhighlight>
=Patching=
==Upgrade OPatch==
Do as root:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OPATCH_PATCH_ZIP=~oracle/orainst/p6880880_112000_Solaris86-64.zip
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
eval mv ${ORACLE_HOME}/{$(opatch version | nawk '/OPatch Version:/{print $1","$1"_"$NF;}')}
unzip -d ${ORACLE_HOME} ${OPATCH_PATCH_ZIP}
chown -R grid:oinstall ${ORACLE_HOME}/OPatch
zfs snapshot -r rpool/grid@$(opatch version | nawk '/OPatch Version:/{print $1"_"$NF;}')
</syntaxhighlight>
==Apply PSU==
On first node as user grid:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
OCM_RSP=~grid/ocm_gridcluster1.rsp
${ORACLE_HOME}/OPatch/ocm/bin/emocmrsp -output ${OCM_RSP}
scp ${OCM_RSP} <other node1>:
scp ${OCM_RSP} <other node2>:
...
</syntaxhighlight>
On all nodes do as root:
<syntaxhighlight lang=bash>
export ORACLE_HOME=/opt/gridhome/11.2.0.4
export PATH=${PATH}:${ORACLE_HOME}/bin
export PATH=${PATH}:${ORACLE_HOME}/OPatch
OCM_RSP=~grid/ocm_gridcluster1.rsp
PSU_DIR=~oracle/orainst/psu
PSU_ZIP=~oracle/orainst/p22378167_112040_Solaris86-64.zip
PSU=~oracle/orainst/psu/22378167
su - grid -c "mkdir -p ${PSU_DIR}"
su - grid -c "unzip -d ${PSU_DIR} ${PSU_ZIP}"
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_before_${PSU##*/}"
zfs snapshot -r rpool/grid@before_psu_${PSU##*/}
cd ~grid
for patch in $(find ${PSU} -name bundle.xml | xargs -n 1 dirname) ; do
opatch auto ${patch} -oh ${ORACLE_HOME} -ocmrf ${OCM_RSP}
done
$ORACLE_HOME/crs/install/rootcrs.pl -unlock # <-- on all nodes
# For every other patch do:
su - grid -c "cd ${patchdir} ; opatch prereq CheckConflictAgainstOHWithDetail -ph ./" # <-- only on first node
su - grid -c "cd ${patchdir} ; opatch apply" # <-- only on first node
$ORACLE_HOME/crs/install/rootcrs.pl -patch # <-- on all nodes
zfs snapshot -r rpool/grid@after_psu_${PSU##*/}
${ORACLE_HOME}/bin/emctl start dbconsole
su - grid -c "opatch lsinventory -detail -oh ${ORACLE_HOME} > ~grid/lsinventory_after_${PSU##*/}"
</syntaxhighlight>
==Configure local listener to another port==
As grid user:
<syntaxhighlight lang=bash>
$ srvctl modify listener -l LISTENER -o ${ORACLE_HOME} -p "TCP:50650"
$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:50650
$ srvctl stop listener -l LISTENER ; srvctl start listener -l LISTENER
$ sqh
SQL>show parameter list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(A
DDRESS=(PROTOCOL=TCP)(HOST=172
.1.20.1)(PORT=1521))))
remote_listener string
SQL> alter system set local_listener ="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.1.20.1)(PORT=50650))))" SID='+ASM1' ;
System altered.
SQL> ^D
</syntaxhighlight>
=ASM=
==Create ASM diskgroups==
LUNs.txt contains all disks with:
# one line per disk.
# each disk in the first field.
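A line in LUNs.txt could then look like this (device name borrowed from the data_config.xml example below; the slice is still s2 as delivered by format):
<syntaxhighlight lang=bash>
/dev/rdsk/c0t60002AC000000000C903010650004002d0s2
</syntaxhighlight>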
===Example for chdg===
<syntaxhighlight lang=awk>
# nawk -v type='DATA' '
BEGIN {
printf "<chdg name=\"%s\" power=\"3\">\n",type;
}
/002d0/,/011d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/002d0/){
# first disk
count=1;
printf " <add>\n";
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/011d0/){
# last disk
print " </fg>";
print " </add>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</chdg>\n";
}
' LUNs.txt
</syntaxhighlight>
===Example for mkdg===
<syntaxhighlight lang=awk>
# nawk -v type='FRA' '
BEGIN {
printf "<dg name=\"%s\" redundancy=\"normal\">\n",type;
}
/012d0/,/015d0/ {
if(/C903/){storage="HSA1";};
if(/C906/){storage="HSA2";};
if(/C061/){storage="HSA3";};
if(/C062/){storage="HSA4";};
if(/012d0/){
# first disk
count=1;
printf " <fg name=\"%s_%s\">\n",storage,type;
};
gsub(/s2$/,"s0",$1);
printf " <dsk name=\"%s_%s%02d\" string=\"%s\"/>\n",storage,type,count++,$1;
if(/015d0/){
# last disk
print " </fg>";
}
}
END {
printf "<a name=\"compatible.asm\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.rdbms\" value=\"11.2\"/>\n";
printf "<a name=\"compatible.advm\" value=\"11.2\"/>\n";
printf "</dg>\n";
}
' LUNs.txt
</syntaxhighlight>
data_config.xml:
<syntaxhighlight lang=xml>
<chdg name="data" power="3">
<add>
<fg name="HSA1_DATA">
<dsk name="HSA1_DATA01" string="/dev/rdsk/c0t60002AC000000000C903010650004002d0s0"/>
<dsk name="HSA1_DATA02" string="/dev/rdsk/c0t60002AC000000000C903010650004003d0s0"/>
<dsk name="HSA1_DATA03" string="/dev/rdsk/c0t60002AC000000000C903010650004004d0s0"/>
<dsk name="HSA1_DATA04" string="/dev/rdsk/c0t60002AC000000000C903010650004005d0s0"/>
<dsk name="HSA1_DATA05" string="/dev/rdsk/c0t60002AC000000000C903010650004006d0s0"/>
<dsk name="HSA1_DATA06" string="/dev/rdsk/c0t60002AC000000000C903010650004007d0s0"/>
<dsk name="HSA1_DATA07" string="/dev/rdsk/c0t60002AC000000000C903010650004008d0s0"/>
<dsk name="HSA1_DATA08" string="/dev/rdsk/c0t60002AC000000000C903010650004009d0s0"/>
<dsk name="HSA1_DATA09" string="/dev/rdsk/c0t60002AC000000000C903010650004010d0s0"/>
<dsk name="HSA1_DATA10" string="/dev/rdsk/c0t60002AC000000000C903010650004011d0s0"/>
</fg>
</add>
<add>
<fg name="HSA2_DATA">
<dsk name="HSA2_DATA01" string="/dev/rdsk/c0t60002AC000000000C906010650004002d0s0"/>
<dsk name="HSA2_DATA02" string="/dev/rdsk/c0t60002AC000000000C906010650004003d0s0"/>
<dsk name="HSA2_DATA03" string="/dev/rdsk/c0t60002AC000000000C906010650004004d0s0"/>
<dsk name="HSA2_DATA04" string="/dev/rdsk/c0t60002AC000000000C906010650004005d0s0"/>
<dsk name="HSA2_DATA05" string="/dev/rdsk/c0t60002AC000000000C906010650004006d0s0"/>
<dsk name="HSA2_DATA06" string="/dev/rdsk/c0t60002AC000000000C906010650004007d0s0"/>
<dsk name="HSA2_DATA07" string="/dev/rdsk/c0t60002AC000000000C906010650004008d0s0"/>
<dsk name="HSA2_DATA08" string="/dev/rdsk/c0t60002AC000000000C906010650004009d0s0"/>
<dsk name="HSA2_DATA09" string="/dev/rdsk/c0t60002AC000000000C906010650004010d0s0"/>
<dsk name="HSA2_DATA10" string="/dev/rdsk/c0t60002AC000000000C906010650004011d0s0"/>
</fg>
</add>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</chdg>
</syntaxhighlight>
asmh:
<syntaxhighlight lang=oracle11>
ASMCMD [+] > chdg data_config.xml
</syntaxhighlight>
aac957ee305135fbaab8101d5cc4730e20ab99b4
RootKitScanner
0
237
2325
920
2021-11-25T16:01:23Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Security]]
=RKHunter=
RKHunter is a local security scanner for Linux, Solaris and some other UNIX operating systems.
I will describe usage for Ubuntu/Linux here.
==Installation==
First of all install it to your system:
<syntaxhighlight lang=bash>
# aptitude install rkhunter
</syntaxhighlight>
==Update the rule base==
After that (and do this from time to time) update the rule base:
<syntaxhighlight lang=bash>
# rkhunter --update
[ Rootkit Hunter version 1.4.0 ]
Checking rkhunter data files...
Checking file mirrors.dat [ No update ]
Checking file programs_bad.dat [ Updated ]
Checking file backdoorports.dat [ No update ]
Checking file suspscan.dat [ No update ]
Checking file i18n/cn [ No update ]
Checking file i18n/de [ Updated ]
Checking file i18n/en [ Updated ]
Checking file i18n/tr [ Updated ]
Checking file i18n/tr.utf8 [ Updated ]
Checking file i18n/zh [ No update ]
Checking file i18n/zh.utf8 [ No update ]
</syntaxhighlight>
==Do the first check==
<syntaxhighlight lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
Warning: Found enabled inetd service: rstatd/1-5
Warning: syslog-ng configuration file allows remote logging: destination d_logserver { udp("logserver-1"); };
Warning: Suspicious file types found in /dev:
/dev/.udev/rules.d/root.rules: ASCII text
Warning: Hidden directory found: '/etc/.bzr: directory '
Warning: Hidden directory found: '/dev/.udev: directory '
Warning: Hidden file found: /etc/.bzrignore: ASCII text
Warning: Hidden file found: /etc/.etckeeper: ASCII text
Warning: Hidden file found: /dev/.initramfs: symbolic link to `/run/initramfs'
</syntaxhighlight>
Many warnings.
Check which are false positives and modify your '''/etc/rkhunter.conf'''.
==Acknowledge false positives==
For example, to get rid of the warnings above, add these lines to the '''/etc/rkhunter.conf''':
<syntaxhighlight lang=bash>
ALLOWHIDDENDIR="/dev/.udev"
ALLOWHIDDENDIR="/etc/.bzr"
ALLOWHIDDENFILE="/etc/.bzrignore"
ALLOWHIDDENFILE="/etc/.etckeeper"
ALLOWHIDDENFILE="/dev/.initramfs"
ALLOWDEVFILE="/dev/.udev/rules.d/root.rules"
INETD_ALLOWED_SVC=rstatd/1-5
ALLOW_SYSLOG_REMOTE_LOGGING=1
</syntaxhighlight>
After that rkhunter should have no output:
<syntaxhighlight lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
#
</syntaxhighlight>
Now you have done your base setup. From now on, any further output should prompt you to take a closer look at your system.
==Configure ongoing security checks==
Configure the user which should get warnings via email in your '''/etc/rkhunter.conf''':
<syntaxhighlight lang=bash>
MAIL-ON-WARNING="security-team@yourdomain.tld"
</syntaxhighlight>
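On Ubuntu the packaged cron job can run the check periodically; enabling it in '''/etc/default/rkhunter''' is a sketch (option names as shipped by the Debian/Ubuntu package):
<syntaxhighlight lang=bash>
# /etc/default/rkhunter
CRON_DAILY_RUN="yes" # run rkhunter --cronjob once a day
CRON_DB_UPDATE="yes" # update the data files periodically
</syntaxhighlight>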
6cc7dc495ff3d436c6c8a45346e08028af8304fb
IPS cheat sheet
0
98
2326
1919
2021-11-25T16:01:31Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris11]]
=Cheat sheet=
[[File:Ips-one-liners.pdf|page=1|600px]]
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: Rechnername -> Accept
-> Download Key
-> Download Certificate
2. Copy it to your Solaris 11 host into /var/pkg/ssl.
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
<pre>
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
5. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
6. Check for updates:
<pre>
# pkg update -nv
</pre>
7. If needed or wanted, do the update:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
As of now, the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not available in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But.. in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0. Let us take a look at what the system thinks is wrong with our files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
=Solaris 11 release=
<syntaxhighlight lang=bash>
$ LANG=C pkg info kernel | nawk '$1 == "Version:"{split($2,version,/\./)}$1 == "Branch:"{split($2,branch,/\./)}END{printf ("Solaris %d.%d Update %d SRU %d SRU-Build %d\n",version[2],version[3],branch[3],branch[4],branch[6])}'
Solaris 5.11 Update 2 SRU 0 SRU-Build 42
</syntaxhighlight>
= Update available? =
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
latest_11_3=$(pkg list -H -af ${package} | nawk '$2 ~ /^0.5.11-0.175.3/{print $2; exit;}')
printf "%s\n%s\nLatest_11.3: %s\n" "${local}" "${remote}" "${latest_11_3}" | nawk -v package="${package}" '
BEGIN{
nr=0;
}
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
$1=="Latest_11.3:" {
split($2, latest_part, "-");
latest_version=latest_part[1];
latest_branch=latest_part[2];
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUptodate at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[3], branch_part[5], branch_part[6]);
if (version[0]=="0.5.11" && branch[0] != latest_branch ) {
split(latest_branch, latest_part, /\./);
be_version3=sprintf("%d.%d.%d.%d.%d",version_part[3], latest_part[3], latest_part[4], latest_part[5], latest_part[6]);
printf ("\nTo update and stay in Solaris 11.3-Branch you can use:\n\tpkg install --accept --require-new-be --be-name solaris_%s\n\n", be_version3);
}else if (version[0]=="0.5.11" && branch[0] == latest_branch ) {
printf ("\nYou are at the latest version of the 11.3-Branch (%s), but you can upgrade to 11.4 .\n",branch[0]);
}
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null \
|| echo "Cannot refresh packages" \
&& if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</syntaxhighlight>
= ZFS automatic snapshots =
<syntaxhighlight lang=bash>
pkg install pkg:/desktop/time-slider
svcadm restart svc:/system/dbus:default
</syntaxhighlight>
b4ddb0ce2062f5b97d024891144bfef78e76c556
ZFS fileinfo
0
90
2327
838
2021-11-25T16:01:40Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS|fileinfo]]
If you want to see afterwards, e.g., with which blocksize a file was created, you can inspect it with zdb:
<pre>
# zdb -ddd <ZFS> <i-Node>
</pre>
e.g.
<pre>
# ls -i /.globaldevices
524575 /.globaldevices
# zdb -dddd rpool/ROOT/zfsBE 524575
Dataset rpool/ROOT/zfsBE [ZPL], ID 45, cr_txg 8, 27.5G, 459538 objects, rootbp DVA[0]=<0:b1eb43600:200:STD:1> DVA[1]=<0:da0e39e00:200:STD:1> [L0 DMU objset] fletcher4 lzjb BE contiguous unique 2-copy size=800L/200P birth=3168L/3168P fill=459538 cksum=17cad0b0f0:7230399a8a3:134096738e1d8:25bba0c8eec052
Object lvl iblk dblk dsize lsize %full type
524575 3 16K 128K 100M 100M 100.00 ZFS plain file
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 799
path /.globaldevices
uid 0
gid 0
atime Wed Aug 22 09:50:28 2012
mtime Wed Aug 22 09:50:28 2012
ctime Wed Aug 22 09:50:28 2012
crtime Wed Aug 22 09:47:15 2012
gen 2639
mode 101600
size 104857600
parent 4
links 1
</pre>
d182356b4d1ac6d931e9b359f1eb429b5a9f26be
Inetd services
0
251
2328
960
2021-11-25T16:02:13Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Setting up rsyncd as inetd service==
1. Put it into the legacy file /etc/inetd.conf
<syntaxhighlight lang=bash>
# printf "rsync\tstream\ttcp\tnowait\troot\t/usr/bin/rsync\t/usr/bin/rsync --config=/etc/rsyncd.conf --daemon\n" >> /etc/inetd.conf
</syntaxhighlight>
2. Use inetconv to generate your XML file
<syntaxhighlight lang=bash>
# inetconv -o /tmp
100235/1 -> /tmp/100235_1-rpc_ticotsord.xml
Importing 100235_1-rpc_ticotsord.xml ...Done
rsync -> /tmp/rsync-tcp.xml
Importing rsync-tcp.xml ...Done
</syntaxhighlight>
3. Optionally modify the generated XML file /tmp/rsync-tcp.xml
4. Import the XML file
<syntaxhighlight lang=bash>
# svccfg import /tmp/rsync-tcp.xml
</syntaxhighlight>
5. Enable it:
<syntaxhighlight lang=bash>
# inetadm -e svc:/network/rsync/tcp:default
</syntaxhighlight>
6. Check it:
<syntaxhighlight lang=bash>
# netstat -anf inet | nawk -v port="$(nawk '$1=="rsync"{gsub(/\/.*$/,"",$2);print $2;}' /etc/services)" '$1 ~ port"$" && $NF=="LISTEN"'
*.873 *.* 0 0 49152 0 LISTEN
</syntaxhighlight>
5c4399f5ad83ee117f6b4ea6ab5722fc2ddd360e
Solaris ssh from DVD
0
111
2330
2260
2021-11-25T16:02:20Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Solaris|SSH]]
=Get SSH on a system booted from DVD=
==Mount DVD==
<syntaxhighlight lang=bash>
# iostat -En
c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: AMI Product: Virtual CDROM Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 732 Predictive Failure Analysis: 0
...
# mkdir /tmp/dvd
# mount -F hsfs -oro /dev/dsk/c0t0d0s0 /tmp/dvd
</syntaxhighlight>
==Unpacking software==
<syntaxhighlight lang=bash>
# mkdir /tmp/pkg
# pkgtrans /tmp/dvd/Solaris_10/Product /tmp/pkg SUNWsshu SUNWcry SUNWopenssl-libraries
# mkdir /tmp/ssh
# cd /tmp/ssh
# 7z x -so /tmp/pkg/SUNWsshu/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWcry/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWopenssl-libraries/archive/none.7z | cpio -idv
</syntaxhighlight>
==Use unpacked libraries==
<syntaxhighlight lang=bash>
# crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
# crle
Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
Command line:
crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
</syntaxhighlight>
==Check it==
<syntaxhighlight lang=bash>
# ldd /tmp/ssh/usr/bin/ssh
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libz.so.1 => /usr/lib/libz.so.1
libcrypto.so.0.9.7 => /usr/sfw/lib/libcrypto.so.0.9.7
libgss.so.1 => /usr/lib/libgss.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libcmd.so.1 => /lib/libcmd.so.1
libdoor.so.1 => /lib/libdoor.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
libm.so.2 => /lib/libm.so.2
</syntaxhighlight>
Looks good:
* libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
==Use ssh from /tmp/ssh==
<syntaxhighlight lang=bash>
# /tmp/ssh/usr/bin/ssh <user>@<ip>
</syntaxhighlight>
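Once you are done, you may want to drop the temporary runtime linker configuration again; on Solaris 10, removing /var/ld/ld.config restores the defaults (a sketch, check with plain crle afterwards):
<syntaxhighlight lang=bash>
# Remove the custom linker config; "crle" without arguments
# should then report the system default search path again.
rm /var/ld/ld.config
crle
</syntaxhighlight>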
139a74b5d0bfe7ea21ea080ad7d363e12c10924e
Oracle Tips and Tricks
0
220
2331
1943
2021-11-25T16:02:40Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<syntaxhighlight lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</syntaxhighlight>
Once .bash_profile is sourced (as it is at login), you have a lowercase alias for each ORACLE_SID that sets everything you need.
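As an illustration (the oratab entry and path below are made up), the lowercase derivation used for ALIASNAME can be reproduced directly:
<syntaxhighlight lang=bash>
# Hypothetical /etc/oratab line:
#   DEVDE:/u01/app/oracle/product/19.0.0/dbhome_1:Y
# The loop above then defines a "devde" alias; the lowercase
# conversion it relies on is plain bash case modification:
ORACLE_SID=DEVDE
echo "${ORACLE_SID,,}"   # -> devde
</syntaxhighlight>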
==Recover datafiles==
Problem:
<syntaxhighlight lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</syntaxhighlight>
<syntaxhighlight lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</syntaxhighlight>
<syntaxhighlight lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</syntaxhighlight>
Recover datafile:
<syntaxhighlight lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</syntaxhighlight>
Set file online:
<syntaxhighlight lang=oracle11>
SQL> alter database datafile 18 online
</syntaxhighlight>
Anything else?
<syntaxhighlight lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</syntaxhighlight>
==Show non-default settings==
<syntaxhighlight lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</syntaxhighlight>
==Show CPU count from database==
<syntaxhighlight lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</syntaxhighlight>
==Start up some databases manually==
For example: first DEVDE, then all other DEV*
<syntaxhighlight lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</syntaxhighlight>
==Shut down some databases manually==
For example: first all other DEV*, then DEVDE
<syntaxhighlight lang=bash>
for SID in $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab ) DEVDE
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
lsnrctl stop ${SID}
printf "shutdown immediate\nquit\n" | sqlplus -s "/ as sysdba"
done
</syntaxhighlight>
==Get session id (sid) of system process id (pid)==
<syntaxhighlight lang=sql>
col sid format 999999
col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser from v$session a, v$process b where a.paddr= b.addr and b.spid='&spid' order by b.spid;
</syntaxhighlight>
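To find the OS pid to feed into &spid in the first place, pick it from the process list; a sketch (the ora_ name pattern is the usual convention for Oracle background and server processes):
<syntaxhighlight lang=bash>
# Print pid and process name for Oracle processes; the second
# column of "ps -ef" is the pid, the last field is the name.
ps -ef | grep '[o]ra_' | awk '{print $2, $NF}'
</syntaxhighlight>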
33144c8a55a54b40cb09fe38d8ab8c668d16f8b3
Solaris mdb magic
0
23
2332
1374
2021-11-25T16:02:43Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|Modular Debugger]]
=Various small mdb tricks=
==Memory usage==
<pre>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</pre>
==Query kernel parameters==
Syntax: echo '<Parameter>/D' | mdb -k
<pre>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</pre>
==Set kernel parameters==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<pre>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</pre>
==Inquiry strings in Solaris 11==
<syntaxhighlight lang=bash>
# echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb -k
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
inq_vid = [ "NECVMWar" ]
inq_pid = [ "VMware SATA CD00" ]
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
</syntaxhighlight>
e6809f1084fcb0eee1df4ab827a95ba3f85662d6
PowerDNS
0
287
2333
2243
2021-11-25T16:02:43Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
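To confirm that the pin actually wins before and after installing, check the candidate versions:
<syntaxhighlight lang=bash>
# The candidate should come from zesty while the installed
# base stays xenial, per /etc/apt/preferences.d/pdns.
apt-cache policy pdns-recursor pdns-tools
</syntaxhighlight>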
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> change the setting from
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to take the dev-log socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
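Then restart syslog-ng and verify the whole journald-to-syslog-ng path with logger; the target log file below is distribution-specific (an assumption here):
<syntaxhighlight lang=bash>
systemctl restart syslog-ng.service
logger "syslog-ng forwarding test"
# The message should appear in whatever file your syslog-ng
# destinations write to, e.g. /var/log/syslog on Ubuntu:
tail -n 5 /var/log/syslog
</syntaxhighlight>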
==chroot with systemd==
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</syntaxhighlight>
or
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
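After placing the unit files, reload systemd and enable the bind mount so it is set up before the PowerDNS services start:
<syntaxhighlight lang=bash>
systemctl daemon-reload
systemctl enable --now var-chroot-run-systemd-notify.mount
# Verify the bind mount is active inside the chroot:
findmnt /var/chroot/run/systemd/notify
</syntaxhighlight>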
a3e225f4401d62c7ee46ec20ce19dbe19f927e1a
MySQL Symmetric Encryption
0
272
2334
1153
2021-11-25T16:02:50Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
<syntaxhighlight lang=mysql>
> select hex(aes_encrypt(rpad("abcqweqweqweqwe",31,"~"),"mykey")) as encrypted;
+------------------------------------------------------------------+
| encrypted |
+------------------------------------------------------------------+
| E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843 |
+------------------------------------------------------------------+
</syntaxhighlight>
<syntaxhighlight lang=mysql>
> select trim(trailing "~" from aes_decrypt(unhex("E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843"),"mykey")) as decrypted;
+-----------------+
| decrypted |
+-----------------+
| abcqweqweqweqwe |
+-----------------+
</syntaxhighlight>
58d3cb13e0b55984f31ee97b837a9dab6702be7c
Category:DNS
14
288
2335
1304
2021-11-25T16:02:53Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
OpenSSL
0
347
2336
2232
2021-11-25T16:02:54Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout -print_certs
</syntaxhighlight>
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
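Since the SAN list is the part most easily gotten wrong, it can help to pull just that extension out of the CSR (the grep is a convenience, not an openssl feature):
<syntaxhighlight lang=bash>
$ openssl req -noout -text -in ${hosts[0]}-csr.pem \
    | grep -A1 'Subject Alternative Name'
</syntaxhighlight>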
aa72448844807b2d6183721f75804b365816d4df
NetApp SSH
0
110
2337
774
2021-11-25T16:03:04Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:NetApp|SSH]]
== Check whether the SSH home directory /etc/sshd/<user>/.ssh exists ==
<syntaxhighlight lang=bash>
nac*> priv set -q diag
nac*> ls /etc/sshd/
.
..
ssh_host_key
ssh_host_key.pub
ssh_host_rsa_key
ssh_host_rsa_key.pub
ssh_host_dsa_key
ssh_host_dsa_key.pub
</syntaxhighlight>
== Create a directory with mode 0700 ==
<syntaxhighlight lang=bash>
nac*> options wafl.default_qtree_mode
wafl.default_qtree_mode 0777
nac*> options wafl.default_qtree_mode 0700
nac*> qtree create /vol/vol0/__
nac*> options wafl.default_qtree_mode 0777
</syntaxhighlight>
== Check / enable NDMPd status ==
<syntaxhighlight lang=bash>
nac*> ndmpd status
ndmpd OFF.
No ndmpd sessions active.
nac*> ndmpd on
nac*> ndmpd status
ndmpd ON.
No ndmpd sessions active.
</syntaxhighlight>
== Create the directory by copying the qtree ==
<syntaxhighlight lang=bash>
nac*> ndmpcopy /vol/vol0/__ /vol/vol0/etc/sshd/root/.ssh
...
Ndmpcopy: Transfer successful [ 0 hours, 0 minutes, 20 seconds ]
Ndmpcopy: Done
nac*> qtree delete /vol/vol0/__
</syntaxhighlight>
== Write the SSH key to /etc/sshd/<user>/.ssh/authorized_keys ==
<syntaxhighlight lang=bash>
nac*> wrfile /etc/sshd/root/.ssh/authorized_keys
ssh-dss AAA...== user@clienthost
^C
</syntaxhighlight>
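From the client you can then check that key-based login works; the filer hostname below is a placeholder:
<syntaxhighlight lang=bash>
FILER=nac   # placeholder: your filer's hostname
# Should run without a password prompt if the
# authorized_keys entry was written correctly.
ssh "root@${FILER}" version
</syntaxhighlight>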
528b2c11521ca6b6f2640f729ed813a5b2b4d11d
SuSE NIS
0
380
2338
2099
2021-11-25T16:03:11Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:SuSE]]
=!!!! First of all: You do NOT want NIS, for security reasons !!!!=
NIS is not NIS+ and uses no encryption. So do not use it, or if you really have to, use it wisely!
==NIS Client==
===Add packages===
<syntaxhighlight>
# zypper in yast2-nis-client ypbind
</syntaxhighlight>
===/etc/sysconfig/network/config===
<syntaxhighlight>
NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime"
NETCONFIG_NIS_STATIC_SERVERS="nis-server.domain.tld"
NETCONFIG_NIS_SETDOMAINNAME="yes"
NETCONFIG_NIS_POLICY="auto"
</syntaxhighlight>
<syntaxhighlight>
# netconfig update -f
</syntaxhighlight>
Check:
<syntaxhighlight>
# cat /etc/yp.conf
...
ypserver nis-server.domain.tld
</syntaxhighlight>
===Set NIS Domain===
<syntaxhighlight>
# nisdomainname nis.domain.tld
</syntaxhighlight>
Check:
<syntaxhighlight>
# nisdomainname
nis.domain.tld
#
</syntaxhighlight>
===Add to /etc/passwd===
<syntaxhighlight>
+::::::
</syntaxhighlight>
===Add to /etc/shadow===
<syntaxhighlight>
+::0:0:0::::
</syntaxhighlight>
===/etc/nsswitch.conf===
<syntaxhighlight>
...
passwd: compat
group: compat
...
</syntaxhighlight>
Alternative for older installations:
<syntaxhighlight>
...
passwd: files nis
group: files nis
...
</syntaxhighlight>
===yast===
<syntaxhighlight>
Network Services -> NIS Client
[Alt]+[u] (Use NIS)
[F10] Finish
[F9] Quit
</syntaxhighlight>
Check:
<syntaxhighlight>
# ypcat passwd.byname
</syntaxhighlight>
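Besides ypcat, it is worth confirming that NSS resolution works end to end through the compat entries added above (the username is a placeholder):
<syntaxhighlight lang=bash>
# getent consults /etc/nsswitch.conf, so this exercises
# the full passwd lookup chain including NIS.
getent passwd somenisuser
</syntaxhighlight>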
2a3790c36d55d130a84faed16e53fc3a6a4a446b
Tomcat
0
375
2339
2053
2021-11-25T16:03:18Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
== Terminating SSL at the webserver or load balancer ==
If you want to let Tomcat know that it is behind another instance that terminates SSL, so that it puts https:// into generated links, just add <i>scheme="https"</i> and <i>proxyPort="443"</i> to the non-SSL Connector definition like this:
<syntaxhighlight>
<Connector port="8080" protocol="HTTP/1.1"
server="Apache"
connectionTimeout="20000"
scheme="https"
proxyPort="443"
/>
</syntaxhighlight>
d59610bba4a4de4b2a60ddda9bb1369794cfb65a
MariaDB Tipps und Tricks
0
235
2340
2249
2021-11-25T16:04:26Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:MariaDB]]
==ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded==
===Problem===
<syntaxhighlight lang=bash>
# mysql
ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded
</syntaxhighlight>
===Solution===
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables
150918 15:41:13 mysqld_safe Logging to '/var/log/mysql/error.log'.
150918 15:41:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.0.20-MariaDB-0ubuntu0.15.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> INSERT INTO mysql.plugin (name, dl) VALUES ('unix_socket', 'auth_socket');
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> shutdown
# service mysql start
</syntaxhighlight>
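Afterwards you can confirm that the plugin really is loaded, rather than just noting that the error is gone:
<syntaxhighlight lang=bash>
# unix_socket should be listed as ACTIVE after the INSERT
# into mysql.plugin and the service restart.
mysql -e "SHOW PLUGINS" | grep -i unix_socket
</syntaxhighlight>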
4ab50518bc7a0169373e49c74b243edfa7e5c0b7
Tmux tips and tricks
0
376
2341
2074
2021-11-25T16:16:48Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
== Enable mouse scrollwheel ==
<syntaxhighlight>
# echo "set -g mouse on" >> ~/.tmux.conf
</syntaxhighlight>
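To apply the setting to an already running server without restarting it, reload the configuration file:
<syntaxhighlight lang=bash>
tmux source-file ~/.tmux.conf
# Verify the option took effect:
tmux show -g mouse
</syntaxhighlight>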
7aaa2c431820e529eccbd5ddec303e233d03ffb0
Category:Virtualization
14
282
2342
1278
2021-11-25T16:27:13Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Category:Sendmail
14
101
2343
283
2021-11-25T16:35:14Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
=When it just will not work without Sendmail=
f589a3947fa5e36ef5c043dd1d723086de9da47b
Oracle Tips and Tricks
0
220
2344
2331
2021-11-25T16:43:04Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Oracle|Tipps]]
==Set environment in .bash_profile==
<syntaxhighlight lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</syntaxhighlight>
Once .bash_profile is sourced (as it is at login), you have a lowercase alias for each ORACLE_SID that sets everything you need.
==Recover datafiles==
Problem:
<syntaxhighlight lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</syntaxhighlight>
<syntaxhighlight lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</syntaxhighlight>
<syntaxhighlight lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</syntaxhighlight>
Recover datafile:
<syntaxhighlight lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</syntaxhighlight>
Set file online:
<syntaxhighlight lang=oracle11>
SQL> alter database datafile 18 online
</syntaxhighlight>
Anything else?
<syntaxhighlight lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</syntaxhighlight>
==Show non-default settings==
<syntaxhighlight lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</syntaxhighlight>
==Show CPU count from database==
<syntaxhighlight lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</syntaxhighlight>
==Start up some databases manually==
For example: first DEVDE, then all other DEV*
<syntaxhighlight lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</syntaxhighlight>
==Shut down some databases manually==
For example: first all other DEV*, then DEVDE
<syntaxhighlight lang=bash>
for SID in $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab ) DEVDE
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
lsnrctl stop ${SID}
printf "shutdown immediate\nquit\n" | sqlplus -s "/ as sysdba"
done
</syntaxhighlight>
==Get session id (sid) of system process id (pid)==
<syntaxhighlight lang=sql>
col sid format 999999
col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser from v$session a, v$process b where a.paddr= b.addr and b.spid='&spid' order by b.spid;
</syntaxhighlight>
010b058528391ffaca8d2c13ceb266e528167ace
2379
2344
2021-11-25T20:07:51Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Oracle|Tipps]]
==Set environment in .bash_profile==
<syntaxhighlight lang=bash>
ORATAB=/etc/oratab # <-- maybe somewhere else?
declare -A ORACLE_HOMES
export BASE_PATH=${PATH}
while IFS=$': \t\n' read ORACLE_SID ORACLE_HOME DBSTART
do
# Ignore empty lines, commented lines (#) and ORACLE_SIDs starting with + or - (RAC)
if [[ ${ORACLE_SID} =~ ^(#|[\t ]*$|[-+]) ]] ; then continue ; fi
ALIASNAME=${ORACLE_SID,,*}
eval ORACLE_HOMES["${ORACLE_SID}"]=${ORACLE_HOME}
alias ${ALIASNAME}="export ORACLE_SID=${ORACLE_SID}; export ORACLE_HOME=\${ORACLE_HOMES[${ORACLE_SID}]}; export PATH=\${ORACLE_HOMES[${ORACLE_SID}]}/bin:\${ORACLE_HOMES[${ORACLE_SID}]}/OPatch:\${BASE_PATH}"
done < ${ORATAB}
</syntaxhighlight>
Once .bash_profile is sourced (as it is at login), you have a lowercase alias for each ORACLE_SID that sets everything you need.
==Recover datafiles==
Problem:
<syntaxhighlight lang=oracle11>
ORA-00376: file 18 cannot be read at this time
ORA-01110: data file 18: '/data/oracle/oradata/datafile04.dbf'
</syntaxhighlight>
<syntaxhighlight lang=oracle11>
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_
---------- ------- -------
ERROR CHANGE#
----------------------------------------------------------------- ----------
TIME
---------
18 OFFLINE OFFLINE
5.8016E+12
22-JUL-15
</syntaxhighlight>
<syntaxhighlight lang=oracle11>
SQL> select ONLINE_STATUS from dba_data_files where file_id = 18;
ONLINE_
-------
RECOVER
</syntaxhighlight>
Recover datafile:
<syntaxhighlight lang=oracle11>
SQL> recover datafile 18;
ORA-00279: change 5801623243148 generated at 07/22/2015 21:26:51 needed for thread 1
ORA-00289: suggestion :
/data/oracle/arclog/ORACLESID_1946_1_882824275.ARC
ORA-00280: change 5801623243148 for thread 1 is in sequence #1946
ORA-00278: log file
'/data/oracle/arclog/ORACLESID_1945_1_882824275.ARC' no longer needed
for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.
</syntaxhighlight>
Set file online:
<syntaxhighlight lang=oracle11>
SQL> alter database datafile 18 online
</syntaxhighlight>
Anything else?
<syntaxhighlight lang=oracle11>
SQL> select * from v$recover_file;
no rows selected
</syntaxhighlight>
==Show non-default settings==
<syntaxhighlight lang=oracle11>
SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';
</syntaxhighlight>
==Show CPU count from database==
<syntaxhighlight lang=oracle11>
SQL> SELECT 'DATABASE CPU COUNT: ' || value ||
decode(ISDEFAULT, 'TRUE', ' (ISDEFAULT)', ' (IS NOT DEFAULT !!!: '|| ISDEFAULT ||')')
from V$PARAMETER where UPPER(name) like '%CPU_COUNT%'
</syntaxhighlight>
==Start up some databases manually==
For example: first DEVDE, then all other DEV*
<syntaxhighlight lang=bash>
for SID in DEVDE $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab )
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
printf "startup\nquit\n" | sqlplus -s "/ as sysdba"
lsnrctl start ${SID}
done
</syntaxhighlight>
==Shut down some databases manually==
For example: first all other DEV*, then DEVDE
<syntaxhighlight lang=bash>
for SID in $(awk -F':' '$1 ~ /^DEV/ && $1 !~ /^DEVDE$/ {print $1}' /var/opt/oracle/oratab ) DEVDE
do
export ORAENV_ASK=NO ORACLE_SID=${SID}
. oraenv
lsnrctl stop ${SID}
printf "shutdown immediate\nquit\n" | sqlplus -s "/ as sysdba"
done
</syntaxhighlight>
==Get session id (sid) of system process id (pid)==
<syntaxhighlight lang=sql>
col sid format 999999
col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser from v$session a, v$process b where a.paddr= b.addr and b.spid='&spid' order by b.spid;
</syntaxhighlight>
3dad98e14703dfa946d7e707af1149716a674d20
EasyRSA
0
275
2345
2238
2021-11-25T16:46:04Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie: Security]]
[[Kategorie: Linux]]
=create CA user=
<syntaxhighlight lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</syntaxhighlight>
=Do everything CA specific as CA user!=
<syntaxhighlight lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</syntaxhighlight>
=Setup EasyRSA=
==Ubuntu packets==
<syntaxhighlight lang=bash>
# aptitude install openvpn easy-rsa
</syntaxhighlight>
==Create your CA==
<syntaxhighlight lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</syntaxhighlight>
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Really just do this before you start with your CA. It will delete everything: keys and certificates!!!'''
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
==Generate DH parameter==
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
or
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
==Generate TLS-auth parameter==
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and want to batch-process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<syntaxhighlight lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</syntaxhighlight>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this for example:
<syntaxhighlight lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</syntaxhighlight>
==Create your CA certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
Check it with
$ openssl x509 -noout -text -in keys/ca.crt
==Create the server certificate==
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server openvpn-server
For example server keys with 5 years validity:
$ KEY_EXPIRE=1825 ./build-key-server openvpn-server
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free but on your own risk!!!
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${TYPE}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
;;
*)
param=${param#--}
param=${param//-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
export SERVER
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
if(i==NF) gsub(/\//,", ", $i)
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# Ca Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</syntaxhighlight>
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
</syntaxhighlight>
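The --what-ever=value convention maps a long option name to an upper-case template placeholder. A minimal sketch of that mapping in plain shell (the option name below is just an example):

```shell
# Sketch: derive the template placeholder from a long option name.
# --server-net  ->  <SERVER_NET>
opt='--server-net'
name=${opt#--}                                   # strip the leading dashes
# upper-case the letters and turn dashes into underscores
placeholder="<$(printf '%s' "$name" | tr 'a-z-' 'A-Z_')>"
printf '%s\n' "$placeholder"                     # prints: <SERVER_NET>
```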
==OpenVPN Server==
===OpenVPN Server Template===
# I am using the mysql-auth-plugin from [https://github.com/chantra/openvpn-mysql-auth https://github.com/chantra/openvpn-mysql-auth]
# On the OpenVPN-Server the user openvpn has uid 1195 and gid 1195 and I have a TMP-dir for this user in the /etc/fstab like this:
<pre>
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
</pre>
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</syntaxhighlight>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN Config for client===
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</syntaxhighlight>
bb79826522eac6e658874d9abba9d8f2f3ac6e7e
VirtualBox physical mapping
0
355
2346
2281
2021-11-25T16:47:22Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Virtualbox]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems with installing Windows updates and grub: some Windows updates fail if you have grub in your MBR.
===Create a dummy MBR===
<syntaxhighlight lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
===Create the mapping as a VMDK file===
<syntaxhighlight lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
After that create a VM and use this special VMDK file.
acbf0cc3add615e6e916fcf33ab84ca9ee9af9cd
DNS cheatsheet
0
290
2347
1322
2021-11-25T16:49:19Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
=dig=
==Compare several nameservers to check whether their SOA records match==
<syntaxhighlight lang=bash>
$ domain=denic.de
$ printf "Domain: %s\n" ${domain} ; for ns in $(dig +short ${domain} ns) ; do printf "Nameserver: %s => SOA: %s\n" ${ns} "$(dig +short ${domain} soa @${ns})" ; done
Domain: denic.de
Nameserver: ns2.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns1.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns3.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
</syntaxhighlight>
==dns2hosts==
<syntaxhighlight lang=perl>
#!/usr/bin/perl
use Net::DNS;
use Net::DNS qw(rrsort);
my @nameservers = ("auth-dns-1.domain.de","auth-dns-2.domain.de");
my $net_regex = '10\.11\.';
my $domain = 'domain.de';
# cut_off_domain=0 : host.domain
# cut_off_domain=1 : short name only
# cut_off_domain=2 : short name and with domain
my $cut_off_domain=1;
my $res = Net::DNS::Resolver->new;
$res->nameservers(@nameservers);
Net::DNS::RR::A->set_rrsort_func ('asorted',
sub {($a,$b)=($Net::DNS::a,$Net::DNS::b);
$a->{'address'} cmp $b->{'address'}});
# Get the zone
my @zone = $res->axfr($domain);
# All A records
my @addresses = grep { $_->type eq "A" } @zone;
# Filter out net if $net_regex is set
@addresses = grep { $_->address =~ /$net_regex/ } @addresses if(defined($net_regex));
# All CNAME records
my @cnames = grep { $_->type eq "CNAME" } @zone;
my $host;
foreach $rr (rrsort("A","asorted", @addresses)) {
$host=$rr->name;
$host=(split /\./,$host)[0] if ($cut_off_domain eq 1);
$host=(split /\./,$rr->name)[0]." ".$rr->name if ($cut_off_domain eq 2);
print $rr->address."\t".$host;
foreach $cname (grep { $_->cname eq $rr->name } @cnames) {
$host=$cname->name;
$host=(split /\./,$host)[0] if ($cut_off_domain eq 1);
$host=(split /\./,$cname->name)[0]." ".$cname->name if ($cut_off_domain eq 2);
print " ".$host;
}
print "\n";
}
</syntaxhighlight>
33222e4e38e321b55b76295134d8193e2edcfcf8
2351
2347
2021-11-25T17:52:50Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
=dig=
==Compare several nameservers to check whether their SOA records match==
<syntaxhighlight lang=bash>
$ domain=denic.de
$ printf "Domain: %s\n" ${domain} ; for ns in $(dig +short ${domain} ns) ; do printf "Nameserver: %s => SOA: %s\n" ${ns} "$(dig +short ${domain} soa @${ns})" ; done
Domain: denic.de
Nameserver: ns2.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns1.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns3.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
</syntaxhighlight>
==dns2hosts==
<syntaxhighlight lang=perl>
#!/usr/bin/perl
use Net::DNS;
use Net::DNS qw(rrsort);
my @nameservers = ("auth-dns-1.domain.de","auth-dns-2.domain.de");
my $net_regex = '10\.11\.';
my $domain = 'domain.de';
# cut_off_domain=0 : host.domain
# cut_off_domain=1 : short name only
# cut_off_domain=2 : short name and with domain
my $cut_off_domain=1;
my $res = Net::DNS::Resolver->new;
$res->nameservers(@nameservers);
Net::DNS::RR::A->set_rrsort_func ('asorted',
sub {($a,$b)=($Net::DNS::a,$Net::DNS::b);
$a->{'address'} cmp $b->{'address'}});
# Get the zone
my @zone = $res->axfr($domain);
# All A records
my @addresses = grep { $_->type eq "A" } @zone;
# Filter out net if $net_regex is set
@addresses = grep { $_->address =~ /$net_regex/ } @addresses if(defined($net_regex));
# All CNAME records
my @cnames = grep { $_->type eq "CNAME" } @zone;
my $host;
foreach $rr (rrsort("A","asorted", @addresses)) {
$host=$rr->name;
$host=(split /\./,$host)[0] if ($cut_off_domain eq 1);
$host=(split /\./,$rr->name)[0]." ".$rr->name if ($cut_off_domain eq 2);
print $rr->address."\t".$host;
foreach $cname (grep { $_->cname eq $rr->name } @cnames) {
$host=$cname->name;
$host=(split /\./,$host)[0] if ($cut_off_domain eq 1);
$host=(split /\./,$cname->name)[0]." ".$cname->name if ($cut_off_domain eq 2);
print " ".$host;
}
print "\n";
}
</syntaxhighlight>
3510f7f86fa2c9c33b39d455058294372b4c05a4
SSL and TLS
0
229
2348
2242
2021-11-25T16:51:49Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category: Security]]
=Web=
==HTTPS==
===TLSA - Record ===
<syntaxhighlight lang=bash>
$ openssl s_client -connect lars.timmann.de:443 </dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER | openssl sha256
(stdin)= e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</syntaxhighlight>
This could be used for a tlsa record like this:
<pre>
_443._tcp.lars.timmann.de. 60 IN TLSA 3 0 1 e642c89062361241dc77f3fb363c8cd0faa04d870b68a3411b8fac8c4b4581ac
</pre>
===HSTS - HTTP Strict Transport Security===
<syntaxhighlight lang=apache>
<VirtualHost <host>:443>
...
Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains;"
...
</VirtualHost>
</syntaxhighlight>
You need to enable the headers module in Apache.
On Ubuntu just do:
<syntaxhighlight lang=bash>
$ sudo a2enmod headers
</syntaxhighlight>
The max-age is entered in seconds:
<syntaxhighlight lang=bash>
$ bc -l
31556926/(60*60*24)
365.24219907407407407407
</syntaxhighlight>
So this value is one year in seconds.
What changes when we set this header and the browser understands it?
The browser rewrites every link on the page to HTTPS, even links that are plain http. If the secure connection cannot be established because of certificate errors, the browser will refuse to load the page. If the header contains ''includeSubDomains;'', subdomains are treated the same way.
Links:
* [https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security HSTS at Wikipedia (English)]
* [https://de.wikipedia.org/wiki/Hypertext_Transfer_Protocol_Secure#HSTS HSTS at Wikipedia (German)]
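To sanity-check what a server sends, you can fetch the headers (e.g. with curl -sI) and pull out the max-age. A minimal parsing sketch; the header value below is hard-coded for illustration:

```shell
# Example Strict-Transport-Security value (hard-coded, no network needed)
hsts='max-age=31556926; includeSubDomains;'
# Extract the max-age in seconds and convert to whole days
max_age=$(printf '%s\n' "$hsts" | sed -n 's/.*max-age=\([0-9]*\).*/\1/p')
printf 'max-age: %s seconds (~%s days)\n' "$max_age" $((max_age / 86400))
# prints: max-age: 31556926 seconds (~365 days)
```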
===HPKP - HTTP Public Key Pinning===
A helpful script to create the hashes was made by Hanno Böck and is accessible at [https://github.com/hannob/hpkp Github].
I added a create option which makes the script more comfortable for me at [https://github.com/Popyllol/hpkp Github], too.
The public key pins for this site are created like this:
<syntaxhighlight lang=bash>
# /etc/apache2/ssl/hpkp-gen.sh create DE Hamburg Hamburg lars.timmann.de
Generating RSA private key, 4096 bit long modulus
..................................................................................................................................................................................................................++
..........................................................................................................................................................................................++
e is 65537 (0x10001)
Generating RSA private key, 4096 bit long modulus
..................................................++
..........................................++
e is 65537 (0x10001)
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";pin-sha256=\"O8xUszxHm+JJpRR4Pycl7LCnKjFpTY3REemrBxQZWQU=\";pin-sha256=\"UcmGe/VSm6N9ruX235yb9PEYseuo+mr2volWwx1RffE=\";"
</syntaxhighlight>
At the end you get one line for adding Strict-Transport-Security and one for Public-Key-Pins, both in Apache format.
<syntaxhighlight lang=apache>
<VirtualHost lars.timmann.de:443>
...
SSLEngine On
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCompression off
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/apache2/ssl/timmann.de-wildcard.pem
SSLCertificateKeyFile /etc/apache2/ssl/timmann.de.ec-key
Header always set Strict-Transport-Security "max-age=31556926;"
Header always set Public-Key-Pins "max-age=5184000; pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";pin-sha256=\"9f3SRITO2UNdpnurhfJGLZqcaXJBUm3WRKRIKYiPARc=\";pin-sha256=\"sEQMIUbXSCbQQAMcCH7712u+cYCjFITlUSH/C1DEGHY=\";"
...
</VirtualHost>
</syntaxhighlight>
You need to enable the headers module in Apache.
On Ubuntu just do:
<syntaxhighlight lang=bash>
$ sudo a2enmod headers
</syntaxhighlight>
=Mail=
==STARTTLS==
with OpenSSL:
<syntaxhighlight lang=bash>
$ openssl s_client -starttls smtp -connect <mailserver>:<port>
</syntaxhighlight>
with GNUTLS:
<syntaxhighlight lang=bash>
$ gnutls-cli --crlf --starttls --port <port> <mailserver>
EHLO hey <-- Send EHLO
250-<mailserver> Hello <yourhost> [<yourip>]
250-SIZE 52428800
250-8BITMIME
250-ETRN
250-PIPELINING
250-AUTH PLAIN
250-STARTTLS
250 HELP
STARTTLS <-- Send STARTTLS
220 TLS go ahead
^D <-- Send CTRL-D to begin STARTTLS handshake
...
- Version: TLS1.2
- Key Exchange: DHE-RSA
- Cipher: AES-256-CBC
- MAC: SHA256
- Compression: NULL
</syntaxhighlight>
You can specify the security priority for the handshake like this:
<syntaxhighlight lang=bash>
$ gnutls-cli --crlf --starttls --priority 'SECURE256:%LATEST_RECORD_VERSION:-VERS-SSL3.0' --port <port> <mailserver>
</syntaxhighlight>
Or use sslscan to check the available ciphers:
<syntaxhighlight lang=bash>
$ sudo apt-get install sslscan
$ sslscan --no-failed --starttls <mailserver>:<port>
</syntaxhighlight>
==SMTPS==
with OpenSSL:
<syntaxhighlight lang=bash>
$ openssl s_client -connect <mailserver>:465
</syntaxhighlight>
with GNUTLS:
<syntaxhighlight lang=bash>
$ gnutls-cli --port 465 <mailserver>
</syntaxhighlight>
19b84ac46d4661b69b62b243273f32bcbff98b0e
NetApp SP
0
211
2349
775
2021-11-25T17:09:09Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Hardware|NetApp]]
[[Category:NetApp|SP]]
== Setup SP IP address==
<syntaxhighlight lang=bash>
filer01> system node service-processor network modify -address-type IPv4 -ip-address 172.32.40.54 -netmask 255.255.255.0 -gateway 172.32.40.1 -enable true
filer01> system node service-processor reboot-sp
Note: If your console connection is through the SP, it will be disconnected.
Do you want to reboot the SP ? {y|n}: y
</syntaxhighlight>
7f28d4b301f758591b179db6a697af885f3b2f29
Snorby
0
234
2350
2258
2021-11-25T17:46:36Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
Just a scribble...
<syntaxhighlight lang=bash>
/usr/local/bin/suricata -D -c /etc/suricata/suricata.yaml -i eth1 --init-errors-fatal
barnyard2 -c /etc/suricata/barnyard2.conf -d /var/log/suricata -f unified2.alert -w /var/log/suricata/suricata.waldo -D
</syntaxhighlight>
765cc3c1a60ee5f1f617b8950d0f3d1eb6cfd5a9
Solaris LDOM
0
203
2352
2263
2021-11-25T17:57:12Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:LDOM]]
[[Category:Solaris]]
==Useful scripts==
===get_pf_from_link_name.sh===
<syntaxhighlight lang=bash>
#!/bin/bash
link=$1
dev=$(dladm show-phys -L ${link} | \
nawk '
NR==2{
dev=$2; gsub(/[0-9]+$/,"",dev);
instance=$2; gsub(/^[^0-9]*/,"",instance);
while(getline < "/etc/path_to_inst"){
gsub(/"/,"",$NF);
if($NF == dev && $(NF-1) == instance){
gsub(/"/,"",$1);
gsub(/^\//,"",$1);
print $1;
}
}
}
')
ldm ls-io -l ${dev}
</syntaxhighlight>
a989cc759dc83cb928f1421f90bce558919ac762
ZFS RaidController
0
186
2353
836
2021-11-25T18:02:51Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris]]
[[Category:ZFS|RaidController]]
=ZFS is better directly on the disks=
Because of ZFS's better checksums you want to disable the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuration of all disks as individual logical drives:
<syntaxhighlight lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpSetProp MaintainPdFailHistoryEnbl 0 -a0
q for quit
</syntaxhighlight>
93377e07061a66c63f403312cb4e0e2f96879518
2380
2353
2021-11-25T20:11:30Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris]]
[[Category:ZFS|RaidController]]
=ZFS is better directly on the disks=
Because of ZFS's better checksums you want to disable the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configuration of all disks as individual logical drives:
<syntaxhighlight lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpSetProp MaintainPdFailHistoryEnbl 0 -a0
q for quit
</syntaxhighlight>
f49be4af6cb8169026ac40150ff4bfe953a78706
Solaris perl
0
93
2354
663
2021-11-25T18:45:02Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Perl]]
==Module::Build / Build.PL==
For error messages like
<pre>
gcc: unrecognized option '-KPIC'
gcc: language O4 not recognized
</pre>
when building Perl modules on Solaris, you can try to override the default variables in Module::Build:
<pre>
# /usr/perl5/bin/perlgcc Build.PL --config cc=gcc --config ld=gcc --config optimize='-O2' --config cccdlflags='-DPIC'
# make
</pre>
The same applies to Makefile.PL:
<pre>
/usr/perl5/bin/perlgcc Makefile.PL cc=gcc ld=gcc optimize='-O2' cccdlflags='-DPIC'
</pre>
==Environment variables for programs that use MakeMaker==
On Solaris there are often problems when only GCC is installed. Calling /usr/perl5/bin/perlgcc helps in most cases.
For SpamAssassin's sa-compile, however, it is of no use. Instead it helps to set the necessary parameters via PERL_MM_OPT:
<pre>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile -D
</pre>
You can find out the parameter names with <i>perl -V</i>.
More on this topic is available [http://search.cpan.org/~mschwern/ExtUtils-MakeMaker/lib/ExtUtils/MakeMaker.pm here]
5c8a62ce0c9f6558e5ce05cb210f69f8a5a25262
Category:Tausendfuesser
14
9
2355
496
2021-11-25T19:06:20Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Tiere]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
4102fbf0160f568b22165bd11505c72a1b5d1d77
Category:MariaDB
14
236
2356
914
2021-11-25T19:20:32Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Ubuntu apt
0
120
2357
2255
2021-11-25T19:22:36Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Ubuntu|apt]]
== Get all non LTS packages ==
<syntaxhighlight lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</syntaxhighlight>
== Ubuntu support status ==
<syntaxhighlight lang=bash>
$ ubuntu-support-status --show-unsupported
</syntaxhighlight>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<syntaxhighlight lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</syntaxhighlight>
==Use this proxy config in the shell==
<syntaxhighlight lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</syntaxhighlight>
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the nameservice.
=== Pin the normal release ===
<syntaxhighlight lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</syntaxhighlight>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Update the package lists ===
<syntaxhighlight lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</syntaxhighlight>
=== Check with "apt-cache policy" which version is preferred now ===
<syntaxhighlight lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</syntaxhighlight>
=== Upgrade to the packages from the new release ===
<syntaxhighlight lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</syntaxhighlight>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<syntaxhighlight lang=bash>
# apt -t zesty install libstdc++6
...
</syntaxhighlight>
6fa50790ce4b55be8184ae43dc0e2b44518158f3
2376
2357
2021-11-25T20:05:57Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Ubuntu|apt]]
== Get all non LTS packages ==
<syntaxhighlight lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</syntaxhighlight>
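The same filter can be exercised offline by piping a couple of hand-written stanzas through it; the package names and Supported values below are made up:

```shell
# Feed two fake apt-cache stanzas through the same awk filter;
# only the package whose Supported field is not "5y" is printed.
printf 'Package: foo\nSupported: 9m\n\nPackage: bar\nSupported: 5y\n\n' | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
# prints only: foo:	9m
```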
== Ubuntu support status ==
<syntaxhighlight lang=bash>
$ ubuntu-support-status --show-unsupported
</syntaxhighlight>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<syntaxhighlight lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is preferred if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</syntaxhighlight>
==Use this proxy config in the shell==
<syntaxhighlight lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</syntaxhighlight>
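To see what the eval actually executes, you can feed a canned line in the apt-config dump format through the same awk; the proxy URL below is made up:

```shell
# Simulated `apt-config dump Acquire` line; the awk emits the shell
# assignment plus export statement that the eval above would execute.
printf 'Acquire::http::Proxy "http://proxy.example:3128";\n' | \
  awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}'
# prints:
# http_proxy="http://proxy.example:3128";
# export http_proxy
```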
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the nameservice.
=== Pin the normal release ===
<syntaxhighlight lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</syntaxhighlight>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Update the package lists ===
<syntaxhighlight lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</syntaxhighlight>
=== Check with "apt-cache policy" which version is preferred now ===
<syntaxhighlight lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</syntaxhighlight>
=== Upgrade to the packages from the new release ===
<syntaxhighlight lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</syntaxhighlight>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<syntaxhighlight lang=bash>
# apt -t zesty install libstdc++6
...
</syntaxhighlight>
1cc3982bd30aef78eb323dac7314b7816175f1e0
HP Smart Array Controller
0
365
2358
2049
2021-11-25T19:27:27Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Hardware]]
=ssacli=
=== Install the tool ===
<syntaxhighlight lang=bash>
# echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --codename)/current non-free" >> /etc/apt/sources.list.d/hp.list
# curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
# apt update && apt install --yes ssacli
</syntaxhighlight>
=== Revive formerly failed disk ===
<syntaxhighlight lang=bash>
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): Failed
logicaldrive 6 (931.48 GB, RAID 0): OK
# ssacli ctrl slot=0 ld 5 modify reenable forced
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): OK
logicaldrive 6 (931.48 GB, RAID 0): OK
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:4] disk HP LOGICAL VOLUME 2.52 /dev/sde
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
</syntaxhighlight>
=hpacucli=
==reenable disk after replacement==
<syntaxhighlight lang=bash>
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, Failed)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
[root@app02 ~]# hpacucli controller slot=0 logicaldrive 2 modify reenable forced
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
</syntaxhighlight>
9c3026c2a0ac07e2dac9226620a98bd5c70f2cf6
Ecryptfs
0
349
2359
1831
2021-11-25T19:30:35Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux]]
==Tips & Tricks==
===ecryptfs-mount-private -> mount: No such file or directory===
====Problem====
<source lang=bash>
user@host:~$ ecryptfs-mount-private
Enter your login passphrase:
Inserted auth tok with sig [affecaffeeaffe00] into the user session keyring
mount: No such file or directory
user@host:~$
</source>
The keys are correctly unlocked
<syntaxhighlight lang=bash>
user@host:~$ keyctl list @u
2 keys in keyring:
1013878144: --alswrv 2223 2223 user: affecaffeeaffe01
270316877: --alswrv 2223 2223 user: affecaffeeaffe02
</syntaxhighlight>
But no luck:
<syntaxhighlight lang=bash>
$ ls -al
total 20
drwx------ 3 ansible admin 8 Dez 7 09:12 .
drwxr-xr-x 6 root root 6 Dez 7 09:10 ..
lrwxrwxrwx 1 root root 32 Dez 7 09:11 .Private -> /home/.ecryptfs/ansible/.Private
lrwxrwxrwx 1 root root 33 Dez 7 09:11 .ecryptfs -> /home/.ecryptfs/ansible/.ecryptfs
lrwxrwxrwx 1 root root 52 Dez 7 09:12 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt
lrwxrwxrwx 1 root root 56 Dez 7 09:11 ecryptfs-mount-private.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop
</syntaxhighlight>
====Workaround====
<syntaxhighlight lang=bash>
user@host:~$ keyctl link @u @s
user@host:~$ ecryptfs-mount-private
user@host:~$
</syntaxhighlight>
89350ca193f3d49bee55c18a99d7effba5a661c7
Solaris 11 bootadm
0
207
2360
2201
2021-11-25T19:32:05Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|bootadm]]
==Booting via the SP console at 115200 baud==
Add a new ttydef with 115200:
<syntaxhighlight lang=bash>
# echo "console115200:115200 hupcl opost onclr:115200::console" >> /etc/ttydefs
</syntaxhighlight>
Set the new console for system/console-login:default
<syntaxhighlight lang=bash>
# svccfg -s svc:/system/console-login:default setprop ttymon/label=console115200
# svcadm refresh svc:/system/console-login:default
# svcadm restart svc:/system/console-login:default
</syntaxhighlight>
Set up your boot menu:
<syntaxhighlight lang=bash>
# bootadm generate-menu
# bootadm set-menu console=text serial_params='0,115200,8,N,1'
# bootadm change-entry -i 0 kargs="-B \$zfs_bootfs,console=ttya"
# bootadm add-entry -i 1 "Solaris (non-cluster)"
# bootadm change-entry -i 1 kargs="-B \$zfs_bootfs,console=ttya -x"
# bootadm add-entry -i 2 "Solaris (non-cluster)(single-user)"
# bootadm change-entry -i 2 kargs="-B \$zfs_bootfs,console=ttya -xs"
# bootadm add-entry -i 3 "Solaris (kernel debugger)"
# bootadm change-entry -i 3 kargs="-B \$zfs_bootfs,console=ttya -k"
# bootadm add-entry -i 4 "Solaris (non-cluster)(milestone=none)"
# bootadm change-entry -i 4 kargs="-B \$zfs_bootfs,console=ttya -x -m milestone=none"
</syntaxhighlight>
b6b1e18cf35a0ae08526c3759275bfcf52c1073b
Solaris OracleDB zone
0
188
2361
2253
2021-11-25T19:32:44Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Oracle Zone]]
=Set up an Oracle Database on a Solaris zone with a CPU limit=
Our setup is an x86 server with 48 GB of RAM.
==Limit ZFS ARC==
Add to /etc/system:
<syntaxhighlight lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</syntaxhighlight>
To calculate your own value:
<syntaxhighlight lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</syntaxhighlight>
==Create Zone==
Set values:
<syntaxhighlight lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</syntaxhighlight>
Create the zone with:
<syntaxhighlight lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</syntaxhighlight>
Enable the dynamic pool service to add support for dedicated CPUs:
<syntaxhighlight lang=bash>
svcadm enable svc:/system/pools/dynamic
</syntaxhighlight>
Install and boot:
<syntaxhighlight lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</syntaxhighlight>
CPU-check:
<syntaxhighlight lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</syntaxhighlight>
==Create ZPools==
I used this paper: [http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf]. The values are for Solaris 10.
<syntaxhighlight lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATA_VDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZIL_VDEV="mirror c1t3d0 c1t4d0"
REDOPOOL=redopool
REDOPOOL_DATA_VDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZIL_VDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATA_VDEV="mirror c1t9d0 c1t10d0"
DB_BASEPATH=/database
DB_BLOCK_SIZE=8192
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATA_VDEV} log ${DATABASEPOOL_ZIL_VDEV}
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/data ${DATABASEPOOL}/data
zfs set logbias=throughput ${DATABASEPOOL}/data
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/index ${DATABASEPOOL}/index
zfs set logbias=throughput ${DATABASEPOOL}/index
zfs create -o mountpoint=${DB_BASEPATH}/temp ${DATABASEPOOL}/temp
zfs set logbias=throughput ${DATABASEPOOL}/temp
zfs create -o mountpoint=${DB_BASEPATH}/undo ${DATABASEPOOL}/undo
zfs set logbias=throughput ${DATABASEPOOL}/undo
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATA_VDEV} log ${REDOPOOL_ZIL_VDEV}
zfs create -o mountpoint=${DB_BASEPATH}/redo ${REDOPOOL}/redo
zfs set logbias=latency ${REDOPOOL}/redo
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATA_VDEV}
zfs create -o compression=on -o mountpoint=${DB_BASEPATH}/archive ${ARCHIVEPOOL}/archive
zfs set primarycache=metadata ${ARCHIVEPOOL}/archive
</syntaxhighlight>
5aa2f75fa75bc094636a2f2e1102a8e345b5b1f4
Category:VMWare
14
109
2362
1280
2021-11-25T19:33:37Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Virtualization]]
05c04b5d9c8b1a23bb915ae34cb9a88346d5320a
Awk cheatsheet
0
292
2363
1365
2021-11-25T19:34:19Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:AWK|Cheatsheet]]
==Functions==
===Bytes to human readable===
<syntaxhighlight lang=awk>
function b2h(value,   unit, unit_string){
    # Bytes to human readable; unit and unit_string are locals
    unit=1;
    while(value>=1024){
        unit++;
        value/=1024;
    }
    split("B,KB,MB,GB,TB,PB", unit_string, /,/);
    return sprintf("%.2f%s", value, unit_string[unit]);
}
</syntaxhighlight>
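A quick way to try the function is to inline it in a one-off awk call (the input value is arbitrary; 1073741824 bytes is exactly 1 GiB):

```shell
# Inline b2h in a BEGIN block and print one sample conversion
awk '
function b2h(value,   unit, unit_string){
    unit=1;
    while(value>=1024){ unit++; value/=1024; }
    split("B,KB,MB,GB,TB,PB", unit_string, /,/);
    return sprintf("%.2f%s", value, unit_string[unit]);
}
BEGIN { print b2h(1073741824) }'
# prints: 1.00GB
```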
===Binary to decimal===
<syntaxhighlight lang=awk>
function b2d(bin,   len, i, dec){
    # len, i and dec are locals, so repeated calls do not accumulate
    len=length(bin);
    dec=0;
    for(i=1;i<=len;i++){
        dec+=substr(bin,i,1)*2^(len-i);
    }
    return dec;
}
</syntaxhighlight>
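Same idea for b2d; 1010 in binary is 10 in decimal:

```shell
# Inline b2d and convert one sample value
awk '
function b2d(bin,   len, i, dec){
    len=length(bin);
    dec=0;
    for(i=1;i<=len;i++) dec+=substr(bin,i,1)*2^(len-i);
    return dec;
}
BEGIN { print b2d("1010") }'
# prints: 10
```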
===Quicksort===
This is not my code! It is taken from [http://awk.info/?quicksort here], possibly slightly modified (I cannot check, the site is down).
You can call it like this: <code>qsort(array, 1, length(array));</code>
<syntaxhighlight lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</syntaxhighlight>
Same for alphanumeric:
<syntaxhighlight lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</syntaxhighlight>
Test:
<syntaxhighlight lang=awk>
BEGIN {
string="1524097359810345254";
split(string,array_i,"");
string="ThisIsAQsortExample";
split(string,array_a,"");
}
# BEGIN http://awk.info/?quicksort
function qsort_i(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort_i(A, left, last-1)
qsort_i(A, last+1, right)
}
function qsort_a(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort_a(A, left, last-1)
qsort_a(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
END {
    # iterate by index: "for (element in array)" order is unspecified in awk
    for(i=1; i<=length(array_i); i++)
        printf "%s", array_i[i];
    printf " ===qsort==> ";
    qsort_i(array_i, 1, length(array_i));
    for(i=1; i<=length(array_i); i++)
        printf "%s", array_i[i];
    print;
    for(i=1; i<=length(array_a); i++)
        printf "%s", array_a[i];
    printf " ===qsort==> ";
    qsort_a(array_a, 1, length(array_a));
    for(i=1; i<=length(array_a); i++)
        printf "%s", array_a[i];
    print;
}
</syntaxhighlight>
which outputs:
<pre>
1524097359810345254 ===qsort==> 0011223344455557899
ThisIsAQsortExample ===qsort==> AEIQTaehilmoprssstx
</pre>
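To run the numeric variant outside the wiki, the whole thing fits in one shell call (the input list here is arbitrary):

```shell
# Sort three numbers with the qsort/swap pair from above
awk '
function qsort(A, left, right,   i, last) {
    if (left >= right) return;
    swap(A, left, left + int((right - left + 1) * rand()));
    last = left;
    for (i = left + 1; i <= right; i++)
        if (int(A[i]) < int(A[left]))
            swap(A, ++last, i);
    swap(A, left, last);
    qsort(A, left, last - 1);
    qsort(A, last + 1, right);
}
function swap(A, i, j,   t) { t = A[i]; A[i] = A[j]; A[j] = t }
BEGIN {
    n = split("3 1 2", A, " ");
    qsort(A, 1, n);
    for (i = 1; i <= n; i++) printf "%s%s", A[i], (i < n ? " " : "\n");
}'
# prints: 1 2 3
```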
===Sort words inside parentheses (gawk)===
Written for beautifying lines like:
<syntaxhighlight lang=mysql>
GRANT SELECT (account_id, user, enable_imap, fk_domain_id, enable_virusscan, max_msg_size, from_authuser_only, id, time_start, changed_by, enable_spamblocker, onhold, time_end, archive, mailbox_quota) ON `mail_db`.`mail_account` TO 'user'@'172.16.16.16'
</syntaxhighlight>
This function:
<syntaxhighlight lang=awk>
function inner_brace_sort (rest, delimiter) {
sorted="";
while( match(rest,/\([^\)]+\)/) ) {
sorted=sprintf("%s%s", sorted, substr(rest, 1, RSTART));
inner=substr(rest, RSTART+1, RLENGTH-2);
rest=substr(rest, RSTART+RLENGTH-1, length(rest));
split(inner, inner_a, delimiter);
inner_l=asort(inner_a, inner_s);
for(i=1; i<=inner_l; i++) {
sorted=sprintf("%s%s", sorted, inner_s[i]);
if(i<inner_l) sorted=sprintf("%s, ", sorted);
}
sorted=sprintf("%s", sorted);
}
return sorted""rest;
}
</syntaxhighlight>
This sorts the fields inside the parentheses alphabetically; it can be called like this:
<syntaxhighlight lang=awk>
/\(/ {
print inner_brace_sort($0, ",[ ]*");
}
</syntaxhighlight>
06f97cf706b9e9d681106fde251c505db0e9f6ba
NicTool
0
252
2364
2268
2021-11-25T19:34:51Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
root@nictool:/var/www/nictool# wget https://github.com/msimerson/NicTool/releases/download/2.30/NicTool.tar.gz
root@nictool:/var/www/nictool# tar -xzf NicTool.tar.gz
root@nictool:/var/www/nictool# tar -xzf server/NicToolServer-2.??.tar.gz
root@nictool:/var/www/nictool# tar -xzf client/NicToolClient-2.??.tar.gz
root@nictool:/var/www/nictool# mv server foo; mv NicToolServer-2.?? server
root@nictool:/var/www/nictool# mv client bar; mv NicToolClient-2.?? client
root@nictool:/var/www/nictool# rm -rf foo bar
root@nictool:/var/www/nictool# cd client; perl Makefile.PL; make; sudo make install clean
root@nictool:/var/www/nictool# cd ../server; perl Makefile.PL; make; sudo make install clean
root@nictool:/var/www/nictool# cp server/lib/nictoolserver.conf{.dist,}
root@nictool:/var/www/nictool# cp client/lib/nictoolclient.conf{.dist,}
</syntaxhighlight>
b1966a3dd32de20d385a2ceadebbfa57644939f2
Galera Cluster
0
383
2365
2174
2021-11-25T19:35:48Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Set up the cluster=
==Install the packages==
On each node, do the following as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Set up certificates for the cluster communication==
===Make a CA certificate===
Make the CA certificate with a very long lifetime, as you don't want to deal with regular certificate renewals at this level.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
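After generating the node certificates it is worth checking that each one really validates against the CA. A minimal sketch with throwaway 2048-bit keys and hypothetical file names (node-key.pem, node-cert.pem):

```shell
# Build a throwaway CA, sign one node certificate, then verify the chain
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 -sha256 \
    -keyout ca-key.pem -out ca-cert.pem -batch -subj '/CN=Test CA'
openssl req -newkey rsa:2048 -nodes -keyout node-key.pem \
    -out node-req.pem -batch -subj '/CN=maria-1.server.de'
openssl x509 -req -days 3650 -set_serial 01 -in node-req.pem \
    -out node-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
openssl verify -CAfile ca-cert.pem node-cert.pem
# prints: node-cert.pem: OK
```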
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# Snapshot state transfer (SST): copy entire database, when new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
== Getting information about your cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
d818ed30c690f1458e979d1772e502958ca7563c
Arum maculatum
0
85
2366
174
2021-11-25T19:42:47Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Gefleckter Aronstab
| Taxon_WissName = Arum maculatum
| Taxon_Rang = Art
| Taxon_Autor =
| Taxon2_WissName = Arum
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Araceae
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Description ==
[[Category:Arum]]
c569c07103f98d8bc1b4a1ce9cbd5ab7f3aa9fde
HP 3par
0
213
2367
1373
2021-11-25T19:43:04Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Hardware]]
Unsorted collection... Don't do this... it doesn't quite work this way.
<syntaxhighlight lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</syntaxhighlight>
==Group virtual volumes to sets (vv -> vvset)==
<syntaxhighlight lang=bash>
3par-storage cli% createvvset -comment "Set for all vvs of Solaris Devel" DevelVVSet
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.3
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.4
</syntaxhighlight>
==Create a set of initiators==
<syntaxhighlight lang=bash>
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c2 21000024ff8f5aae
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c3 21000024ff8f5aaf
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createhostset DevelHosts
3par-storage cli% createhostset -add DevelHosts unix14_c2
3par-storage cli% createhostset -add DevelHosts unix14_c3
</syntaxhighlight>
==Map virtual volumes as LUNs to a set of initiators==
<syntaxhighlight lang=bash>
3par-storage cli% createvlun set:DevelVVSet 0+ set:DevelHosts
</syntaxhighlight>
This maps all VVs from DevelVVSet to all hosts in DevelHosts, with automatic LUN numbering (+) starting at 0.
<syntaxhighlight lang=bash>
3par-storage cli% showvlun
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type Status ID
0 VV_DB_TEST01_DATA_DS.3 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
0 VV_DB_TEST01_DATA_DS.3 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
-----------------------------------------------------------------------------------------------
4 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 set:DevelVVset set:DevelHosts ---------------- --- host set
---------------------------------------------------------------------
1 total
</syntaxhighlight>
==Watch disk initialization==
<syntaxhighlight lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</syntaxhighlight>
== Solaris ==
===/kernel/drv/sd.conf===
<pre>
sd-config-list="3PARdataVV","physical-block-size:16384";
</pre>
f4e3b3e84907876a7e155fdce863015722ca47ed
RedHat networking
0
301
2368
2298
2021-11-25T19:45:51Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
= Bonding =
In this example we configure two bonds:
* bond0: failover / active-backup (eno1/bond0_slave1 and eno3/bond0_slave2)
* bond1: LACP (802.3ad)
== /etc/modprobe.d/bonding.conf ==
<syntaxhighlight lang=conf>
alias netdev-bond0 bonding
options bond0 miimon=100 mode=active-backup updelay=0 downdelay=0 primary=bond0_slave1
alias netdev-bond1 bonding
options bond1 miimon=100 mode=4 lacp_rate=1
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond0 ==
<syntaxhighlight lang=conf>
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond0
UUID=9e2088b8-4cfe-435a-b0a2-9387f0fc8024
ONBOOT=yes
DNS1=172.16.0.69
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=active-backup primary=bond0_slave1"
IPADDR=172.16.0.105
PREFIX=16
GATEWAY=172.16.0.1
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave1 ==
<syntaxhighlight lang=conf>
HWADDR=94:18:82:80:C2:18
TYPE=Ethernet
NAME=bond0_slave1
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond0_slave2 ==
<syntaxhighlight lang=conf>
HWADDR=94:18:82:80:C2:1A
TYPE=Ethernet
NAME=bond0_slave2
UUID=a03819df-0715-455d-9726-9348cdbd45c9
DEVICE=eno3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
</syntaxhighlight>
== Check state of bond0 ==
<syntaxhighlight lang=bash>
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:18
Slave queue ID: 0
Slave Interface: eno3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 94:18:82:80:c2:1a
Slave queue ID: 0
</syntaxhighlight>
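For scripting health checks, the interesting fields of /proc/net/bonding/bond0 are easy to pull out with awk; here run against a canned two-line sample instead of the live file:

```shell
# Extract the currently active slave from (sample) bonding status output
printf 'Bonding Mode: fault-tolerance (active-backup)\nCurrently Active Slave: eno1\n' \
    | awk -F': ' '/Currently Active Slave/ { print $2 }'
# prints: eno1
```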
== /etc/sysconfig/network-scripts/ifcfg-bond1 ==
<syntaxhighlight lang=conf>
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond1
UUID=c9a4bce2-5dbe-4cf9-beb6-34a24512ae23
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
IPADDR=172.20.0.30
PREFIX=24
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave1 ==
<syntaxhighlight lang=conf>
TYPE=Ethernet
NAME=bond1_slave1
UUID=9ad3a93f-362e-4a18-bb2e-c4588e666e12
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno49
</syntaxhighlight>
== /etc/sysconfig/network-scripts/ifcfg-bond1_slave2 ==
<syntaxhighlight lang=conf>
TYPE=Ethernet
NAME=bond1_slave2
UUID=6d8015ef-fe60-472a-b18f-17caf952e45b
ONBOOT=yes
MASTER=bond1
SLAVE=yes
MACADDR=14:02:ec:8e:f3:24
MTU=1500
DEVICE=eno50
</syntaxhighlight>
== Check state of bond1 ==
<syntaxhighlight lang=bash>
# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 13
Partner Key: 70
Partner Mac Address: 01:e0:52:00:00:02
Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 534
port state: 63
Slave Interface: eno50
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 14:02:ec:8e:f3:24
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 13
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 70
port priority: 32768
port number: 1046
port state: 63
</syntaxhighlight>
e221f33ca35c125820596255c7092a80e392d513
OpenVPN Inline Certs
0
104
2369
1286
2021-11-25T19:49:39Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:OpenVPN]]
To get an OpenVPN configuration in a single file, you can inline all referenced files like this:
<syntaxhighlight lang=bash>
$ nawk '
/^(tls-auth|ca|cert|key)/ {
type=$1;
file=$2;
# for tls-auth we need the key-direction
if(type=="tls-auth")print "key-direction",$3;
print "<"type">";
while((getline tlsauth<file) > 0)
print tlsauth;
close(file);
print "</"type">";
next;
}
{
# All other lines are printed as they are
print;
}' connection.ovpn
</syntaxhighlight>
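The script can be exercised with a throwaway config. This sketch uses awk (instead of the Solaris-specific nawk) and the safer (getline ...) > 0 loop form, which avoids an endless loop when a referenced file is missing:

```shell
# Build a two-line dummy config plus a referenced "CA" file, then inline it
printf 'line1\nline2\n' > ca.pem
printf 'client\nca ca.pem\n' > connection.ovpn
awk '
/^(tls-auth|ca|cert|key)/ {
    type=$1; file=$2;
    # for tls-auth we would also need the key-direction
    if(type=="tls-auth") print "key-direction", $3;
    print "<"type">";
    while((getline line < file) > 0) print line;
    close(file);
    print "</"type">";
    next;
}
{ print; }' connection.ovpn
```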
And to split the inlined sections back out into files:
<syntaxhighlight lang=bash>
$ nawk '
/^<(tls-auth|ca|dh|cert|key)>/ {
type=$1;
gsub(/[<>]/,"",type);
file=type".pem";
print type,file;
print ""> file;
while(getline) {
if($0 == "</"type">"){
fflush(file);
close(file);
break;
}
print $0>>file;}
next;
}
{
# All other lines are printed as they are
print $0;
}' connection.ovpn > connection_.ovpn
</syntaxhighlight>
de5516a0098aeae85f27011ab37dc92960c0ad1a
Filesysteme Tipps und Tricks
0
194
2370
630
2021-11-25T19:52:17Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:ZFS]]
==Get the creation time... not the changetime==
===Creation time on zfs===
====You need the Filesystem where the file resides====
<syntaxhighlight lang=bash>
# df -h /var/data/dumps/sackhalter_20140407.dump
Filesystem Size Used Avail Use% Mounted on
data/backup/dumps 24G 8.6G 16G 36% /var/data/dumps
</syntaxhighlight>
====You need the i-node number of the file====
<syntaxhighlight lang=bash>
# ls -i /var/data/dumps/sackhalter_20140407.dump
103 /var/data/dumps/sackhalter_20140407.dump
</syntaxhighlight>
====Get the metadata of the file====
<syntaxhighlight lang=bash>
# zdb -dddd data/backup/dumps 103 | grep crtime
crtime Tue Jul 29 13:00:18 2014
</syntaxhighlight>
===Creation time on ext2/3/4===
====You need the Filesystem where the file resides====
<syntaxhighlight lang=bash>
# df -h /usr/bin/passwd
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 8.4G 5.8G 60% /
</syntaxhighlight>
====You need the i-node number of the file====
<syntaxhighlight lang=bash>
# ls -i /usr/bin/passwd
130776 /usr/bin/passwd
</syntaxhighlight>
====Get the metadata of the file====
<syntaxhighlight lang=bash>
# debugfs -R 'stat <130776>' /dev/sda1 2>/dev/null | grep crtime
crtime: 0x5391870e:a6803fc8 -- Fri Jun 6 11:17:02 2014
</syntaxhighlight>
====Nice oneliner====
<syntaxhighlight lang=bash>
# file=/etc/passwd ; ls -1i ${file} | nawk -v dev=$(df --output=source ${file} | tail -n +2) 'BEGIN{debugfs="debugfs -R \"stat <INODE>\" " dev " 2>/dev/null";}{file=$2;command=debugfs;gsub(/INODE/,$1,command); while (command | getline){if(/crtime/){print $0,file}}; close(command);}'
crtime: 0x54009e05:24f51228 -- Fri Aug 29 17:36:37 2014 /etc/passwd
</syntaxhighlight>
164619f6f8518ca0b86e5033ca0df8fef0743381
2374
2370
2021-11-25T20:02:16Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:ZFS]]
==Get the creation time... not the changetime==
===Creation time on zfs===
====You need the Filesystem where the file resides====
<syntaxhighlight lang=bash>
# df -h /var/data/dumps/sackhalter_20140407.dump
Filesystem Size Used Avail Use% Mounted on
data/backup/dumps 24G 8.6G 16G 36% /var/data/dumps
</syntaxhighlight>
====You need the i-node number of the file====
<syntaxhighlight lang=bash>
# ls -i /var/data/dumps/sackhalter_20140407.dump
103 /var/data/dumps/sackhalter_20140407.dump
</syntaxhighlight>
====Get the metadata of the file====
<syntaxhighlight lang=bash>
# zdb -dddd data/backup/dumps 103 | grep crtime
crtime Tue Jul 29 13:00:18 2014
</syntaxhighlight>
===Creation time on ext2/3/4===
====You need the Filesystem where the file resides====
<syntaxhighlight lang=bash>
# df -h /usr/bin/passwd
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 8.4G 5.8G 60% /
</syntaxhighlight>
====You need the i-node number of the file====
<syntaxhighlight lang=bash>
# ls -i /usr/bin/passwd
130776 /usr/bin/passwd
</syntaxhighlight>
====Get the metadata of the file====
<syntaxhighlight lang=bash>
# debugfs -R 'stat <130776>' /dev/sda1 2>/dev/null | grep crtime
crtime: 0x5391870e:a6803fc8 -- Fri Jun 6 11:17:02 2014
</syntaxhighlight>
====Nice oneliner====
<syntaxhighlight lang=bash>
# file=/etc/passwd ; ls -1i ${file} | nawk -v dev=$(df --output=source ${file} | tail -n +2) 'BEGIN{debugfs="debugfs -R \"stat <INODE>\" " dev " 2>/dev/null";}{file=$2;command=debugfs;gsub(/INODE/,$1,command); while (command | getline){if(/crtime/){print $0,file}}; close(command);}'
crtime: 0x54009e05:24f51228 -- Fri Aug 29 17:36:37 2014 /etc/passwd
</syntaxhighlight>
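On reasonably current kernels and GNU coreutils, stat can print the birth time directly via statx, which is often simpler than the debugfs route (it prints '-' when the filesystem does not record a birth time):

```shell
# %w is the file birth time (or '-' if unknown)
stat -c 'birth: %w' /etc/passwd
```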
38b75518246b53e7a265a9708453232e202d64db
Solaris 11 unsorted
0
379
2371
2087
2021-11-25T19:52:38Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Solaris11|unsorted]]
== kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA ==
Problem:
<pre>
Apr 2 11:05:29 host42 kcfd[77]: [ID 180312 user.error] kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA
Apr 2 11:05:29 host42 openssl[2360]: [ID 238837 user.error] libpkcs11: /usr/lib/security/amd64/pkcs11_softtoken.so unexpected failure in ELF signature verification. See cryptoadm(1M). Skipping this plug-in.
</pre>
Solution:
<pre>
# pkg fix pkg:/crypto/ca-certificates
</pre>
== Solaris 11 up to date? ==
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
printf "%s\n%s\n" "${local}" "${remote}" | nawk -v package="${package}" '
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUptodate at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[4], branch_part[5], branch_part[6]);
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null \
|| echo "Cannot refresh packages" \
&& if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</syntaxhighlight>
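The 11.4 BE-name derivation inside check() can be exercised in isolation. This sketch uses plain awk instead of Solaris nawk, and the branch string below is a fabricated placeholder only meant to show which fields are picked, not a real Oracle branch value:

```shell
#!/bin/bash
# Isolated sketch of the Solaris 11.4 BE-name derivation used in check() above.
# The branch argument is a fabricated placeholder; plain awk replaces nawk here.
be_name_from_branch() {
    local branch=$1
    awk -v branch="$branch" 'BEGIN{
        split(branch, branch_part, /\./);
        # fields 1, 2, 4, 5 and 6 of the branch form the BE version
        printf("solaris_%d.%d.%d.%d.%d\n",
               branch_part[1], branch_part[2], branch_part[4],
               branch_part[5], branch_part[6]);
    }'
}
be_name_from_branch 11.4.0.1.2.3
# prints: solaris_11.4.1.2.3
```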
9b04f095fab298ee4474d146541638973b947742
Nice Options
0
253
2372
2116
2021-11-25T19:53:00Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
Linux:
<syntaxhighlight lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
ss -open4all
journalctl -efeu
grep -Hirn
pwgen -nancy 17
</syntaxhighlight>
Solaris:
<syntaxhighlight lang=bash>
prstat -Lmaa
iostat -Erni
</syntaxhighlight>
ad95a68dbb87000a73d1f1ff394232653b8e1344
Windows
0
356
2373
2241
2021-11-25T19:59:30Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
==Manage Stored User Names Passwords==
<syntaxhighlight lang=bat>
%windir%\System32\rundll32.exe keymgr.dll,KRShowKeyMgr
</syntaxhighlight>
51cc907cdc2831e78e0b8e348cd0bbef32f054e0
VMWare Linux parameter
0
108
2375
1342
2021-11-25T20:05:02Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:VMWare]][[Category:Ubuntu]]
==/etc/sysctl.conf==
<syntaxhighlight lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</syntaxhighlight>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<syntaxhighlight lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</syntaxhighlight>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<syntaxhighlight lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</syntaxhighlight>
==Prebuild packages from VMWare==
<syntaxhighlight lang=bash>
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/vmware-repository.list
apt-key adv --keyserver subkeys.pgp.net --recv-keys C0B5E0AB66FD4949
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</syntaxhighlight>
==Source from VMWare==
After removing any previously installed vmware-tools, just follow these steps:
1. Add this to your /etc/apt/sources.list:
<syntaxhighlight lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</syntaxhighlight>
Then do:
<syntaxhighlight lang=bash>
gpg --search C0B5E0AB66FD4949 # add the key (1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add --
</syntaxhighlight>
2. Update your package database:
<syntaxhighlight lang=bash>
# aptitude update
</syntaxhighlight>
3. Get Module-Assistant:
<syntaxhighlight lang=bash>
# aptitude install module-assistant
</syntaxhighlight>
4. Get the base packages:
<syntaxhighlight lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</syntaxhighlight>
5. Get the modules:
<syntaxhighlight lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</syntaxhighlight>
6. Get kernel and headers:
<syntaxhighlight lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</syntaxhighlight>
7. Compile and install the modules with module assistant
<syntaxhighlight lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</syntaxhighlight>
== Minimal /etc/vmware-tools/config ==
<syntaxhighlight lang=bash>
libdir = "/usr/lib/vmware-tools"
</syntaxhighlight>
== Switch to Ubuntu open-vm-tools ==
<syntaxhighlight lang=bash>
# /usr/bin/vmware-uninstall-tools.pl ; aptitude purge open-vm-tools ; apt update ; apt install open-vm-tools
</syntaxhighlight>
1bfd4c517432435280aaa2c007bed60c624fce1e
Linbit
0
289
2377
1320
2021-11-25T20:06:58Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
Disable PingPong:
<syntaxhighlight lang=bash>
# crm configure
crm(live)configure# property maintenance-mode=on
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
</syntaxhighlight>
<syntaxhighlight lang=bash>
# crm configure
crm(live)configure# property maintenance-mode=on
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
# vi config-files...
# crm_resource -l | xargs -l crm resource cleanup
# crm configure
crm(live)configure# property maintenance-mode=off
crm(live)configure# ptest actions
INFO: install graphviz to see a transition graph
notice: LogActions: Start stonith_fence_ipmilan_hhlokva04 (hhlokva03.srv.ndr-net.de)
notice: LogActions: Start stonith_fence_ipmilan_hhlokva03 (hhlokva04.srv.ndr-net.de)
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
</syntaxhighlight>
<syntaxhighlight lang=bash>
# virsh list --all
Id Name State
----------------------------------------------------
1 OSC_v1 running
- ts_v1 shut off
</syntaxhighlight>
a3c422ae244d5d5e2fbee0ba741d9b9c69b503f3
SunServer
0
210
2378
2200
2021-11-25T20:07:23Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Hardware]]
=X86 Systems=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systems=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</syntaxhighlight>
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<syntaxhighlight lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</syntaxhighlight>
* Delete default gateway:
<syntaxhighlight lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</syntaxhighlight>
* Set:
<syntaxhighlight lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</syntaxhighlight>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means turning suppression of break signals off ;-) :
<syntaxhighlight lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</syntaxhighlight>
41e20f9e3e59351c4d3ae4582cd081964fb96541
Category:Schaben
14
142
2381
1104
2021-11-25T20:13:43Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Insekten]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=4}}
5d04b70be6a78cd6551e0a0fe0364e90d6916cd8
ZFS sync script
0
215
2382
2197
2021-11-25T20:17:06Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:ZFS|Sync]]
Like all of my scripts, this script comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark the datasets that should be copied by the backup host, use this on the source:
<syntaxhighlight lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</syntaxhighlight>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, create a ''zfssync'' user like this (Solaris syntax):
<syntaxhighlight lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</syntaxhighlight>
* Exchange ssh keys so that ''SRC_USER'' can log in without a password.
Good luck!
==zfs_sync.sh==
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
# Some defaults
BACKUP_PROPERTY="de.timmann:auto-backup"
BACKUP_SNAPSHOT_NAME="zfssync"
MBUFFER_PORT=10001
MBUFFER=/opt/mbuffer/bin/mbuffer;
SRC_USER=zfssync
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="yes"
LOCAL_SYNC="no"
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
AWK=/usr/bin/gawk
#AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MYHOST=$(/usr/bin/hostname)
MYNAME=$(/usr/bin/basename $0)
function usage () {
if [ $# -gt 0 ]
then
if [ "_${1}_" != "_help_" ]
then
echo "Error: ${MYNAME} : $*"
fi
else
echo "Error: ${MYNAME} : Check parameters"
fi
cat <<EOU
Usage: ${MYNAME} <params>
Where params is from this set of parameters:
-s|--src-ip <IP> The host from where we want to sync
-d|--dst-ip <IP> The IP on this host where the remote mbuffer should try to connect to
If omitted the IP to use is guessed via route get.
-u|--user <user> The user on "--src-ip" which has rights to send a zfs.
It must be able to login via ssh with public key.
On Solaris it is the profile "ZFS File System Management"
Try this on the "--src-ip":
# roleadd \
-d /export/home/zfssync \
-c "User for zfs send/recv" \
-s /bin/bash \
-m \
-P "ZFS File System Management" \
zfssync
# rolemod -K type=normal zfssync
# passwd -N zfssync
And then put the ssh-public-key from this host into
/export/home/zfssync/.ssh/authorized_keys
on the "--src-ip".
Remember to set the permissions on .ssh to 700 and .ssh/authorized_keys to 600.
The home directory of the user must not be world-writable.
-sp|--src-pool <zpool> The zpool we want to sync from "--src-ip".
-dp|--dst-pool <zpool> The zpool on this host where we want to sync to ${MYNAME}.
-mbp|--mbuffer-port <port>
If the default port 10001 is in use use another port.
-mb|--mbuffer-path <path>
Path of mbuffer binary including binary itself.
-mbbw|--mbuffer-bwlimit <rate>
Limit the read bandwidth of mbuffer (mbuffer option -r)
From mbuffer --help: limit read rate to <rate> B/s, where <rate> can be given in b,k,M,G
-bp|--backup-property <property>
This defaults to ${BACKUP_PROPERTY}.
You have to set this property on all ZFS datasets to ${MYHOST}.
# /usr/sbin/zfs set ${BACKUP_PROPERTY}=${MYHOST} <dataset>
This is inherited as usual.
-bsn|--backup-snap-name <snapshotname>
This is the name of the snapshot which we use to sync.
This defaults to ${BACKUP_SNAPSHOT_NAME}.
Never delete this snapshot manually or you will break the sync and restart
from the beginning.
-i|--insecure Not for production environments! No ssh tunneling. No encryption over the net!
EOU
##-l|--local Just do a local zfs send/recv...
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
--help|-h)
usage "help"
;;
-l|--local)
LOCAL_SYNC="yes"
SRC_HOST="localhost"
param="dummy"
shift;
;;
-i|--insecure|--fuck-off-security)
SECURE="no"
param="dummy"
shift;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
param=$1
if [ $# -ge 2 -a "_${2%-*}_" != "__" ]
then
value=$2
shift
fi
shift
;;
esac
case $param in
-s|--src-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_HOST=${value}
;;
-d|--dst-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_HOST=${value};
;;
-u|--user)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_USER=${value}
;;
-sp|--src-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_POOL=${value}
;;
-bsn|--backup-snap-name)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_SNAPSHOT_NAME=${value}
;;
-dp|--dst-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_POOL=${value}
;;
-mbp|--mbuffer-port)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_PORT=${value}
;;
-mb|--mbuffer-path)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER=${value}
;;
-mbbw|--mbuffer-bwlimit)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_OPTS="${MBUFFER_OPTS} -r ${value}"
;;
-bp|--backup-property)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_PROPERTY=${value}
;;
dummy)
;;
*)
usage "Unknown parameter $1"
esac
done
if [ "_${LOCAL_SYNC}_" == "_no_" ]
then
if [ -z ${SRC_HOST} ]; then usage "-s|--src-ip is missing" ; fi
# Guess the right IP for communication with source host
if [ -z ${DST_HOST} ]; then
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
if [ -z ${DST_HOST} ]; then
usage "-d|--dst-ip is missing"
fi
fi
fi
if [ -z ${SRC_POOL} ]; then usage "-sp|--src-pool is missing" ; fi
if [ -z ${DST_POOL} ]; then usage "-dp|--dst-pool is missing" ; fi
SRC_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}_${DST_POOL/\//_}.lck
TMP_FILE1=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp1
TMP_FILE2=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp2
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'echo "\n--- Got signal: Exiting ...\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
/usr/bin/rm -f ${LOCK_FILE}; \
exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}); look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL} > ${SRC_DATASETS} &
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
fi
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYHOST} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# und raus...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="^${1}" -F '[@ \t]' '
$3 == "snapshot" && $1 ~ zfs {
last=$1"@"$2;
}
END{
printf last;
}
' $2
}
function get_incremental_snapshot () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[$i]} with $# Arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting snapshots from ${first} to ${last}..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} send -I ${first_snap} ${last} | ${ZFS} recv -vFd ${dst_pool}
else
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function get_initial_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} send -R ${zfs} | ${ZFS} recv -vFd ${dst_pool}
else
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M:%S')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} destroy ${src_backup_snapshot}
status=$?
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}"
status=$?
fi
if [ ${status} -eq 0 ] ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange we do not have the copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@${BACKUP_SNAPSHOT_NAME}_$(timestamp)
# Create snapshot for incremental backups
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} snapshot ${this_backup_src}
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
fi
if [ -z "${last_src}" ] ; then
last_src=${this_backup_src}
fi
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v initial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>initial_copies){
print last[count-initial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{printf last}' ${SRC_DATASETS} )
get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</syntaxhighlight>
6abbb21108d2449665f4396c75edb641e8da2e5d
Autofs
0
256
2383
2223
2021-11-25T20:19:12Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux|autofs]]
[[Category:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<syntaxhighlight lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</syntaxhighlight>
===/etc/auto.master.d/home.autofs===
<syntaxhighlight lang=bash>
/home /etc/auto.master.d/home.map
</syntaxhighlight>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<syntaxhighlight lang=bash>
* :/data/home/& nfs.server.de:/home/&
</syntaxhighlight>
or from a server that supports NFSv4.1:
<syntaxhighlight lang=bash>
* -proto=tcp,vers=4.1 nfs.server.de:/home/&
</syntaxhighlight>
The asterisk means that any directory requested under /home is matched by this rule.
The ampersand is replaced by the part that was matched by the asterisk.
So if you enter /home/a, the automounter first searches locally for /data/home/a, which will be bind-mounted when found.
<syntaxhighlight lang=bash>
# cd /home/a
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</syntaxhighlight>
For another /home/b which is on the nfs server it looks like this:
<syntaxhighlight lang=bash>
# cd /home/b
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</syntaxhighlight>
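The key-and-ampersand substitution used by these maps can be illustrated with a small shell sketch (the helper function name is made up; this is not how automount implements it internally):

```shell
# Substitute every "&" in a map location with the requested key,
# the way the automounter expands wildcard map entries.
expand_map_entry() {
  key=$1
  template=$2
  printf '%s\n' "$template" | sed "s|&|$key|g"
}
expand_map_entry a ':/data/home/&'          # -> :/data/home/a
expand_map_entry b 'nfs.server.de:/home/&'  # -> nfs.server.de:/home/b
```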
===cifs===
<i>/etc/auto.master.d/mycifsshare.autofs</i>:
<syntaxhighlight lang=bash>
/data/cifs /etc/auto.master.d/mycifsshare.map
</syntaxhighlight>
<i>/etc/auto.master.d/mycifsshare.map</i>:
<syntaxhighlight lang=bash>
mycifsshare -fstype=cifs,rw,credentials=/etc/samba/mycifsshare_credentials,uid=<myuser>,forceuid ://192.168.1.2/mycifsshare
</syntaxhighlight>
9c3e89e2958fcd02089a79b64f330c6aebb1d8be
Cachefilesd
0
382
2384
2293
2021-11-25T20:19:55Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
==Create ramdisk for cache if enough RAM==
A directory named /cache is created and a ramdisk is mounted there!
<syntaxhighlight lang=bash>
# systemctl --force --full edit create-ramdisk@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=create cache dir in ramdisk
After=remote-fs.target
Before=cachefilesd.service
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
ExecStartPre=/sbin/modprobe brd rd_nr=1 rd_size=%i
ExecStartPre=/sbin/sgdisk -Z --new 1:0:0 /dev/ram0
ExecStartPre=/sbin/mkfs.ext4 -m 0 /dev/ram0p1
ExecStartPre=-/bin/mkdir /cache
ExecStart=/bin/mount -o user_xattr /dev/ram0p1 /cache
ExecStop=/bin/umount /cache
ExecStop=/sbin/rmmod brd
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Create, for example, a 2 GB disk with:
<syntaxhighlight lang=bash>
# systemctl start create-ramdisk@$[ 2 * 1024 * 1024 ].service
</syntaxhighlight>
Destroy it again:
<syntaxhighlight lang=bash>
# systemctl stop create-ramdisk@$[ 2 * 1024 * 1024 ].service
</syntaxhighlight>
Make a 4 GB one instead:
<syntaxhighlight lang=bash>
# systemctl start create-ramdisk@$[ 4 * 1024 * 1024 ].service
</syntaxhighlight>
Once you have found the right value, make it permanent for the next reboot with:
<syntaxhighlight lang=bash>
# systemctl enable create-ramdisk@$[ ${your_gigabyte_value} * 1024 * 1024 ].service
</syntaxhighlight>
==Check if kernel supports filesystem cache for your filesystem type==
<syntaxhighlight lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</syntaxhighlight>
== Setup /etc/cachefilesd.conf ==
<syntaxhighlight>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
# obviously this should be a path to a ramdisk if you have enough ram
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</syntaxhighlight>
== Problems with autofs mounted filesystems ==
When using automount with caching, cachefilesd must be running <b>before</b> autofs comes up and might mount the filesystem.
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<syntaxhighlight lang=bash>
# update-rc.d cachefilesd disable
</syntaxhighlight>
==== Have cachefilesd started by systemd ====
<syntaxhighlight lang=bash>
# systemctl edit --force --full cachefilesd.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==== Enable the new service ====
<syntaxhighlight lang=bash>
# systemctl enable cachefilesd.service
</syntaxhighlight>
==== Verify that autofs depends on cachefilesd.service ====
<syntaxhighlight lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</syntaxhighlight>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<syntaxhighlight lang=bash>
# apt install cachefilesd autofs cifs-utils
</syntaxhighlight>
=== Create the credentials file ===
<syntaxhighlight lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</syntaxhighlight>
=== Create basedir of your cifs mounts ===
<syntaxhighlight lang=bash>
# mkdir --mode=0755 /data/cifs
</syntaxhighlight>
===/etc/auto.master===
<syntaxhighlight>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</syntaxhighlight>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<syntaxhighlight>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</syntaxhighlight>
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<syntaxhighlight lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</syntaxhighlight>
but after a few requests:
<syntaxhighlight lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</syntaxhighlight>
52706bbe92c4f38720320515d878a8c208b024a7
2414
2384
2021-11-25T21:35:32Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category: Linux]]
=Cachefilesd=
==Create ramdisk for cache if enough RAM==
A directory named /cache is created and a ramdisk is mounted there!
<syntaxhighlight lang=bash>
# systemctl --force --full edit create-ramdisk@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=create cache dir in ramdisk
After=remote-fs.target
Before=cachefilesd.service
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
ExecStartPre=/sbin/modprobe brd rd_nr=1 rd_size=%i
ExecStartPre=/sbin/sgdisk -Z --new 1:0:0 /dev/ram0
ExecStartPre=/sbin/mkfs.ext4 -m 0 /dev/ram0p1
ExecStartPre=-/bin/mkdir /cache
ExecStart=/bin/mount -o user_xattr /dev/ram0p1 /cache
ExecStop=/bin/umount /cache
ExecStop=/sbin/rmmod brd
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Create, for example, a 2 GB disk with:
<syntaxhighlight lang=bash>
# systemctl start create-ramdisk@$[ 2 * 1024 * 1024 ].service
</syntaxhighlight>
Destroy it again:
<syntaxhighlight lang=bash>
# systemctl stop create-ramdisk@$[ 2 * 1024 * 1024 ].service
</syntaxhighlight>
Make a 4 GB one instead:
<syntaxhighlight lang=bash>
# systemctl start create-ramdisk@$[ 4 * 1024 * 1024 ].service
</syntaxhighlight>
Once you have found the right value, make it permanent for the next reboot with:
<syntaxhighlight lang=bash>
# systemctl enable create-ramdisk@$[ ${your_gigabyte_value} * 1024 * 1024 ].service
</syntaxhighlight>
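The template instance is the ramdisk size in kibibytes (the unit of brd's rd_size parameter), which is why the examples above multiply by 1024 * 1024. A small conversion helper, as a sketch (the function name is made up):

```shell
# brd's rd_size is specified in KiB, so the systemd template instance
# for an n GiB ramdisk is n * 1024 * 1024.
gb_to_rd_size() {
  echo $(( $1 * 1024 * 1024 ))
}
gb_to_rd_size 2   # -> 2097152
```

So `systemctl start create-ramdisk@$(gb_to_rd_size 2).service` is equivalent to the 2 GB example above.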
==Check if kernel supports filesystem cache for your filesystem type==
<syntaxhighlight lang=bash>
# grep "CONFIG_.*_FSCACHE" /boot/config-`uname -r`
CONFIG_NFS_FSCACHE=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CIFS_FSCACHE=y
CONFIG_AFS_FSCACHE=y
CONFIG_9P_FSCACHE=y
</syntaxhighlight>
== Setup /etc/cachefilesd.conf ==
<syntaxhighlight>
###############################################################################
#
# Copyright (C) 2006,2010 Red Hat, Inc. All Rights Reserved.
# Written by David Howells (dhowells@redhat.com)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version
# 2 of the License, or (at your option) any later version.
#
###############################################################################
# obviously this should be a path to a ramdisk if you have enough ram
dir /var/cache/fscache
secctx cachefiles_kernel_t
tag mycache
debug 1
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
# secctx system_u:system_r:cachefiles_kernel_t:s0
</syntaxhighlight>
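The brun/bcull/bstop percentages refer to free space on the cache filesystem: culling begins when free space falls below bcull, allocation stops below bstop, and culling ends again once free space rises above brun (frun/fcull/fstop work the same way for available files). For a hypothetical 2 GiB cache, the thresholds above work out as:

```shell
# Hypothetical 2 GiB cache filesystem, thresholds from the config above.
cache_kib=$(( 2 * 1024 * 1024 ))
for pct in 10 7 3; do
  echo "${pct}% = $(( cache_kib * pct / 100 )) KiB free"
done
```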
== Problems with autofs mounted filesystems ==
When using automount with caching, cachefilesd must be running <b>before</b> autofs comes up and might mount the filesystem.
=== Make sure it is started by systemd ===
==== Disable the SYSV way of starting ====
<syntaxhighlight lang=bash>
# update-rc.d cachefilesd disable
</syntaxhighlight>
==== Have cachefilesd started by systemd ====
<syntaxhighlight lang=bash>
# systemctl edit --force --full cachefilesd.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Documentation=man:cachefilesd
Description=LSB: CacheFiles daemon
After=remote-fs.target
Before=autofs.service
[Service]
Type=simple
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
RuntimeDirectory=cachefilesd
ExecStartPre=-/sbin/modprobe -qab cachefiles
ExecStart=/sbin/cachefilesd -n -p /run/cachefilesd/cachefilesd.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==== Enable the new service ====
<syntaxhighlight lang=bash>
# systemctl enable cachefilesd.service
</syntaxhighlight>
==== Verify that autofs depends on cachefilesd.service ====
<syntaxhighlight lang=bash>
# systemctl show -p After,Before autofs.service | grep cachefilesd.service
After=cachefilesd.service network.target network-online.target sysinit.target ypbind.service basic.target system.slice sssd.service systemd-journald.socket remote-fs.target
</syntaxhighlight>
== Define a cached CIFS share with autofs ==
=== Install needed packages ===
<syntaxhighlight lang=bash>
# apt install cachefilesd autofs cifs-utils
</syntaxhighlight>
=== Create the credentials file ===
<syntaxhighlight lang=bash>
# mkdir --mode=0700 /etc/cifs_cred
# touch /etc/cifs_cred/credentials
# chmod 0600 /etc/cifs_cred/credentials
# cat > /etc/cifs_cred/credentials <<EOF
username=myuser
password=mypass
EOF
</syntaxhighlight>
=== Create basedir of your cifs mounts ===
<syntaxhighlight lang=bash>
# mkdir --mode=0755 /data/cifs
</syntaxhighlight>
===/etc/auto.master===
<syntaxhighlight>
...
/data/cifs /etc/auto.cifs-shares --timeout=0 --ghost
...
</syntaxhighlight>
===/etc/auto.cifs-shares===
The option {{strong|fsc}} enables caching:
<syntaxhighlight>
myshare -fstype=cifs,credentials=/etc/cifs_cred/credentials,nounix,file_mode=0644,vers=3.0,dir_mode=0755,noperm,fsc ://cifsserver.my.dom/the_cifs_share
</syntaxhighlight>
== Check if things are getting cached ==
Initially there is nothing in the cache (almost all values are zero):
<syntaxhighlight lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=2 dat=0 spc=0
Objects: alc=0 nal=0 avl=0 ded=0
ChkAux : non=0 ok=0 upd=0 obs=0
Pages : mrk=0 unc=0
Acquire: n=2 nul=0 noc=0 ok=2 nbf=0 oom=0
Lookups: n=0 neg=0 pos=0 crt=0 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=0 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=0 ok=0 wt=0 nod=0 nbf=0 int=0 oom=0
Retrvls: ops=0 owt=0 abt=0
Stores : n=0 ok=0 agn=0 nbf=0 oom=0
Stores : ops=0 run=0 pgs=0 rxd=0 olm=0
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
</syntaxhighlight>
but after a few requests:
<syntaxhighlight lang=bash>
# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=3 dat=77 spc=0
Objects: alc=80 nal=0 avl=80 ded=70
ChkAux : non=0 ok=1 upd=0 obs=1
Pages : mrk=3138215 unc=181438
Acquire: n=150 nul=0 noc=0 ok=80 nbf=0 oom=0
Lookups: n=80 neg=78 pos=2 crt=78 tmo=0
Invals : n=0 run=0
Updates: n=0 nul=0 run=0
Relinqs: n=70 nul=0 wcr=0 rtr=70
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=72447 ok=0 wt=6 nod=72447 nbf=0 int=0 oom=0
Retrvls: ops=72447 owt=15 abt=0
Stores : n=3136954 ok=3136954 agn=0 nbf=0 oom=0
Stores : ops=67042 run=3203996 pgs=3136954 rxd=3136954 olm=0
VmScan : nos=180177 gon=0 bsy=0 can=0 wt=0
Ops : pend=15 run=139489 enq=3203996 can=0 rej=0
Ops : ini=3209401 dfr=266 rel=3209401 gc=266
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=1 stl=0 rtr=0 cul=0
</syntaxhighlight>
0338c20148c219402db182a834c1289e28fada3c
Oracle Discoverer
0
364
2385
2248
2021-11-25T20:20:44Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Oracle]]
== Changing the IP address ==
Just some notes from the last change... sorry
<syntaxhighlight lang=bash>
vi /etc/sysconfig/network/ifcfg-eth0
vi /etc/sysconfig/network/routes
vi /etc/hosts
/etc/init.d/network restart
# Change the VLAN in vCenter
# reconnect with new IP
#
# Change the config
#
/opt/Middleware/ashome_1/chgip/scripts/chgiphost.sh -noconfig -oldhost discoverer01.srv.net.de -newhost discoverer.srv.net.de -oldip 172.16.31.29 -newip 172.16.7.4 -instanceHome /opt/Middleware/asinst_1
/etc/init.d/weblogic stop
# Adminserver, too
/opt/Middleware/wlserver_10.3/server/bin/setWLSEnv.sh
/opt/Middleware/wlserver_10.3/common/bin/wlst.sh
wls:/offline> readDomain('/opt/Middleware/user_projects/domains/ClassicDomain')
wls:/offline/ClassicDomain> cd ('/Machine/neuerhostname')
wls:/offline/ClassicDomain/Machine/neuerhostname> machine=cmo
wls:/offline/ClassicDomain/Machine/neuerhostname> cd ('/Server/AdminServer')
wls:/offline/ClassicDomain/Server/AdminServer> set('Machine', machine)
wls:/offline/ClassicDomain/Server/AdminServer> updateDomain()
wls:/offline/ClassicDomain/Server/AdminServer> exit()
# Start after the changes
/etc/init.d/weblogic start
netstat -plant | grep 9001
tail -f /opt/Middleware/user_projects/domains/ClassicDomain/servers/WLS_DISCO/logs/WLS_DISCO.out
</syntaxhighlight>
cb0bdd06cf7287c31e49df062ad3cdbbe1c6652e
Linux udev
0
88
2386
1268
2021-11-25T20:22:07Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|udev]]
==Persistent network interface names==
If you have no <i>/etc/udev/rules.d/70-persistent-net.rules</i> just create one:
<syntaxhighlight lang=bash>
# lshw -C network | awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}' >> /etc/udev/rules.d/70-persistent-net.rules
</syntaxhighlight>
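To see what the one-liner generates before appending it, you can feed it canned lshw output (the MAC address and interface name below are fabricated):

```shell
# Two sample lines in the shape lshw -C network produces;
# "logical name:" precedes "serial:" in real output as well.
rule=$(printf '       logical name: eth0\n       serial: 00:16:3e:aa:bb:cc\n' | \
  awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}')
echo "$rule"
```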
or add a specific interface to <i>/etc/udev/rules.d/70-persistent-net.rules</i>:
<syntaxhighlight lang=bash>
# MATCHADDR="00:50:56:a1:20:22" INTERFACE=eth2 /lib/udev/write_net_rules
</syntaxhighlight>
Change order with:
<syntaxhighlight lang=bash>
# vi /etc/udev/rules.d/70-persistent-net.rules
</syntaxhighlight>
Then let udev reread the file:
<syntaxhighlight lang=bash>
# udevadm trigger --action=add --subsystem-match=net --verbose
</syntaxhighlight>
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<syntaxhighlight lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</syntaxhighlight>
===Test your rule===
<syntaxhighlight lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</syntaxhighlight>
OK, user mysql is unknown... maybe I should install MySQL ;-).
After that:
<syntaxhighlight lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</syntaxhighlight>
===Trigger your rule===
<syntaxhighlight lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
01e10b44f0ccc4cca4a3dc1fb55c418df45714e3
OpenVPN Inline Certs
0
104
2387
2369
2021-11-25T20:24:28Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:OpenVPN]]
To get an OpenVPN configuration in one file, you can inline all referenced files like this:
<syntaxhighlight lang=bash>
$ nawk '
/^(tls-auth|ca|cert|key)/ {
type=$1;
file=$2;
# for tls-auth we need the key-direction
if(type=="tls-auth")print "key-direction",$3;
print "<"type">";
while(getline tlsauth<file)
print tlsauth;
close(file);
print "</"type">";
next;
}
{
# All other lines are printed as they are
print;
}' connection.ovpn
</syntaxhighlight>
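A self-contained run of the same approach on a fabricated minimal config (the temporary paths and certificate contents are made up; plain awk is used here instead of Solaris nawk):

```shell
tmp=$(mktemp -d)
printf 'CERTLINE1\nCERTLINE2\n' > "$tmp/ca.crt"
printf 'client\nca %s/ca.crt\n' "$tmp" > "$tmp/connection.ovpn"
out=$(awk '
/^(tls-auth|ca|cert|key)/ {
  type=$1; file=$2;
  # tls-auth additionally needs the key-direction argument
  if(type=="tls-auth") print "key-direction",$3;
  print "<"type">";
  while((getline line < file) > 0) print line;
  close(file);
  print "</"type">";
  next;
}
{ print }  # all other lines pass through unchanged
' "$tmp/connection.ovpn")
echo "$out"
# Prints the config with the ca file inlined between <ca></ca> tags.
```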
And to extract the inlined sections back into files:
<syntaxhighlight lang=bash>
$ nawk '
/^<(tls-auth|ca|dh|cert|key)>/ {
type=$1;
gsub(/[<>]/,"",type);
file=type".pem";
print type,file;
print ""> file;
while(getline) {
if($0 == "</"type">"){
fflush(file);
close(file);
break;
}
print $0>>file;}
next;
}
{
# All other lines are printed as they are
print $0;
}' connection.ovpn > connection_.ovpn
</syntaxhighlight>
2c8fa06b66b106a6b3ae4b56a6d5119848083bd3
VMWare Certificate
0
280
2388
2217
2021-11-25T20:25:31Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:VMWare]]
[[Category:Security]]
== Generate a new certificate ==
=== Disable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 1
</pre>
=== Allow SSH in the firewall ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Firewall
-> Incoming Connections
-> Edit
-> Enable SSH server
</pre>
=== Enable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Start SSH
</pre>
<syntaxhighlight lang=bash>
$ ssh root@esx-host
~ # cd /etc/vmware/ssl
/etc/vmware/ssl # mv rui.key rui.key.orig
/etc/vmware/ssl # mv rui.crt rui.crt.orig
/etc/vmware/ssl # /sbin/generate-certificates
/etc/vmware/ssl # ls -al *.key *.crt
-rw-r--r-- 1 root root 1440 May 30 09:33 rui.crt
-r-------- 1 root root 1704 May 30 09:33 rui.key
</syntaxhighlight>
=== Disable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Stop SSH
</pre>
=== Re-enable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 0
</pre>
=== Restart the CIM server ===
The CIM server must be restarted so that the new certificate is also used.
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> CIM Server
-> Restart
</pre>
cfb0b218badd906ba67a1bb570a84f121a3f67c6
NetApp SP
0
211
2389
2349
2021-11-25T20:26:24Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Hardware|NetApp]]
[[Category:NetApp|SP]]
== Setup SP IP address==
<syntaxhighlight lang=bash>
filer01> system node service-processor network modify -address-type IPv4 -ip-address 172.32.40.54 -netmask 255.255.255.0 -gateway 172.32.40.1 -enable true
filer01> system node service-processor reboot-sp
Note: If your console connection is through the SP, it will be disconnected.
Do you want to reboot the SP ? {y|n}: y
</syntaxhighlight>
4182bda8342a0b54730c4255e6839240d0a49346
VMWare Linux parameter
0
108
2390
2375
2021-11-25T20:27:11Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:VMWare]][[Category:Ubuntu]]
==/etc/sysctl.conf==
<syntaxhighlight lang=bash>
# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</syntaxhighlight>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<syntaxhighlight lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</syntaxhighlight>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<syntaxhighlight lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</syntaxhighlight>
==Prebuild packages from VMWare==
<syntaxhighlight lang=bash>
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/vmware-repository.list
apt-key adv --keyserver subkeys.pgp.net --recv-keys C0B5E0AB66FD4949
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</syntaxhighlight>
==Source from VMWare==
After you have removed any previously installed vmware-tools, just follow these steps:
1. Add this to your /etc/apt/sources.list:
<syntaxhighlight lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</syntaxhighlight>
Then do:
<syntaxhighlight lang=bash>
gpg --search C0B5E0AB66FD4949 # select (1) to add the key
gpg -a --export C0B5E0AB66FD4949 | apt-key add --
</syntaxhighlight>
2. Update your package database:
<syntaxhighlight lang=bash>
# aptitude update
</syntaxhighlight>
3. Get Module-Assistant:
<syntaxhighlight lang=bash>
# aptitude install module-assistant
</syntaxhighlight>
4. Get the base packages:
<syntaxhighlight lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</syntaxhighlight>
5. Get the modules:
<syntaxhighlight lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</syntaxhighlight>
6. Get kernel and headers:
<syntaxhighlight lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</syntaxhighlight>
7. Compile and install the modules with module assistant
<syntaxhighlight lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</syntaxhighlight>
== Minimal /etc/vmware-tools/config ==
<syntaxhighlight lang=bash>
libdir = "/usr/lib/vmware-tools"
</syntaxhighlight>
== Switch to Ubuntu open-vm-tools ==
<syntaxhighlight lang=bash>
# /usr/bin/vmware-uninstall-tools.pl ; aptitude purge open-vm-tools ; apt update ; apt install open-vm-tools
</syntaxhighlight>
c55e9939fb8251452d62cd92335b4368ef1decbf
Solaris LDOM
0
203
2391
2352
2021-11-25T20:28:45Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:LDOM]]
[[Category:Solaris]]
==Useful scripts==
===get_pf_from_link_name.sh===
<syntaxhighlight lang=bash>
#!/bin/bash
link=$1
dev=$(dladm show-phys -L ${link} | \
nawk '
NR==2{
dev=$2; gsub(/[0-9]+$/,"",dev);
instance=$2; gsub(/^[^0-9]*/,"",instance);
while(getline < "/etc/path_to_inst"){
gsub(/"/,"",$NF);
if($NF == dev && $(NF-1) == instance){
gsub(/"/,"",$1);
gsub(/^\//,"",$1);
print $1;
}
}
}
')
ldm ls-io -l ${dev}
</syntaxhighlight>
7c4df17d5e18a7bd1e7cac89da3841bf38732762
Solaris 11 hwmgmt
0
352
2392
2319
2021-11-25T20:30:40Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|hwmgmt]]
=Commands=
==hwmgmtcli==
==ilomconfig==
<syntaxhighlight lang=bash>
# ilomconfig list network
</syntaxhighlight>
==raidconfig==
<syntaxhighlight lang=bash>
# raidconfig list all
</syntaxhighlight>
==fwupdate==
<syntaxhighlight lang=bash>
# fwupdate list all
</syntaxhighlight>
==itpconfig==
<syntaxhighlight lang=bash>
# itpconfig list interconnect
Interconnect
============
State: enabled
Type: USB Ethernet
SP Interconnect IP Address: 169.254.182.76
Host Interconnect IP Address: 169.254.182.77
Interconnect Netmask: 255.255.255.0
SP Interconnect MAC Address: 02:21:28:57:47:16
Host Interconnect MAC Address: 02:21:28:57:47:17
</syntaxhighlight>
31acd44c20e39d521ec40ec31aa130c09a57ffb2
RootKitScanner
0
237
2393
2325
2021-11-25T20:32:56Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Security]]
=RKHunter=
RKHunter is a local security scanner for Linux, Solaris and some other UNIX operating systems.
I will describe its usage on Ubuntu here.
==Installation==
First of all install it to your system:
<syntaxhighlight lang=bash>
# aptitude install rkhunter
</syntaxhighlight>
==Update the rule base==
After that (and do this from time to time) update the rule base:
<syntaxhighlight lang=bash>
# rkhunter --update
[ Rootkit Hunter version 1.4.0 ]
Checking rkhunter data files...
Checking file mirrors.dat [ No update ]
Checking file programs_bad.dat [ Updated ]
Checking file backdoorports.dat [ No update ]
Checking file suspscan.dat [ No update ]
Checking file i18n/cn [ No update ]
Checking file i18n/de [ Updated ]
Checking file i18n/en [ Updated ]
Checking file i18n/tr [ Updated ]
Checking file i18n/tr.utf8 [ Updated ]
Checking file i18n/zh [ No update ]
Checking file i18n/zh.utf8 [ No update ]
</syntaxhighlight>
==Do the first check==
<syntaxhighlight lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
Warning: Found enabled inetd service: rstatd/1-5
Warning: syslog-ng configuration file allows remote logging: destination d_logserver { udp("logserver-1"); };
Warning: Suspicious file types found in /dev:
/dev/.udev/rules.d/root.rules: ASCII text
Warning: Hidden directory found: '/etc/.bzr: directory '
Warning: Hidden directory found: '/dev/.udev: directory '
Warning: Hidden file found: /etc/.bzrignore: ASCII text
Warning: Hidden file found: /etc/.etckeeper: ASCII text
Warning: Hidden file found: /dev/.initramfs: symbolic link to `/run/initramfs'
</syntaxhighlight>
Many warnings.
Check which are false positives and modify your '''/etc/rkhunter.conf'''.
==Acknowledge false positives==
For example, to get rid of the warnings above, add these lines to the '''/etc/rkhunter.conf''':
<syntaxhighlight lang=bash>
ALLOWHIDDENDIR="/dev/.udev"
ALLOWHIDDENDIR="/etc/.bzr"
ALLOWHIDDENFILE="/etc/.bzrignore"
ALLOWHIDDENFILE="/etc/.etckeeper"
ALLOWHIDDENFILE="/dev/.initramfs"
ALLOWDEVFILE="/dev/.udev/rules.d/root.rules"
INETD_ALLOWED_SVC=rstatd/1-5
ALLOW_SYSLOG_REMOTE_LOGGING=1
</syntaxhighlight>
After that, rkhunter should produce no output:
<syntaxhighlight lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
#
</syntaxhighlight>
Your base setup is now done. From now on, any further output should prompt you to take a closer look at your system.
==Configure ongoing security checks==
Configure the recipient that should get warnings via email in your '''/etc/rkhunter.conf''':
<syntaxhighlight lang=bash>
MAIL-ON-WARNING="security-team@yourdomain.tld"
</syntaxhighlight>
218f589413671e114066bad7fc94cac236bea859
Ubuntu zsys
0
377
2394
2237
2021-11-25T20:35:09Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[category:Ubuntu]]
==Configure garbage collection==
<syntaxhighlight lang=yaml>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entries per unit of time if enough of them are present
# The order determines the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC starts after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 7
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsysd.service
zsysctl service gc
update-grub
</syntaxhighlight>
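The retention window implied by the gcrules can be computed from the config itself: each rule covers buckets × bucketlength days. A small awk sketch, reading the file written above:
<syntaxhighlight lang=bash>
# Sketch: sum buckets * bucketlength over all gcrules to get the total
# number of days covered by the retention rules in /etc/zsys.conf.
awk '
  $1 == "buckets:"      { b = $2 }
  $1 == "bucketlength:" { days += b * $2 }
  END                   { print days " days covered by gcrules" }
' /etc/zsys.conf
</syntaxhighlight>
For the rules above this prints 34 days (1·1 + 5·1 + 4·7), on top of the first full day kept via gcstartafter.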
36b8ba9e654ee4b008443513b85c53230b56c6ed
Solaris LiveUpgrade
0
218
2395
2305
2021-11-25T20:39:02Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<syntaxhighlight lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</syntaxhighlight>
Higher patch revisions may be available...
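The "-LR" suffixes stand for "latest revision". Whether an installed patch meets a given minimum can be checked by comparing the revision part of the patch ID; a sketch with a fabricated `showrev -p` line and a hypothetical minimum revision (-57):
<syntaxhighlight lang=bash>
# Sketch: compare the revision of an installed Solaris patch against a
# required minimum. The showrev -p sample line and the minimum revision
# (-57) are fabricated for illustration.
required="121430-57"
installed_line="Patch: 121430-92 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWluu"
installed=$(echo "$installed_line" | awk '{ print $2 }')
req_rev=${required#*-}     # revision part of the required patch ID
inst_rev=${installed#*-}   # revision part of the installed patch ID
if [ "$inst_rev" -ge "$req_rev" ]; then
  echo "patch ${required%-*}: OK (rev $inst_rev >= $req_rev)"
else
  echo "patch ${required%-*}: update needed (rev $inst_rev < $req_rev)"
fi
</syntaxhighlight>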
==Mount the Solaris 10 DVD ISO-image==
<syntaxhighlight lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</syntaxhighlight>
==Create the new BootEnvironment==
<syntaxhighlight lang=bash>
# lucreate -n Solaris10u11
</syntaxhighlight>
==Upgrade the new BootEnvironment==
<syntaxhighlight lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</syntaxhighlight>
==Activate the new BootEnvironment==
<syntaxhighlight lang=bash>
# luactivate Solaris10u11
</syntaxhighlight>
=Install EIS patches=
==Mount the new EIS-ISO==
<syntaxhighlight lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</syntaxhighlight>
==Update LU patches==
<syntaxhighlight lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</syntaxhighlight>
==Create the new BootEnvironment==
<syntaxhighlight lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</syntaxhighlight>
==Mount the new BootEnvironment==
<syntaxhighlight lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</syntaxhighlight>
==Install EIS-Patches==
<syntaxhighlight lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
Now the Solaris 10_x86 Recommended Patches...
...
</syntaxhighlight>
==Problems: Installing this patch set to an alternate boot environment first requires the live boot environment to have patch utilities and other prerequisite patches==
<syntaxhighlight lang=bash>
Installing this patch set to an alternate boot environment first requires the
live boot environment to have patch utilities and other prerequisite patches
at the same (or higher) patch revisions as those delivered by this patch set.
The required prerequisite patches can be applied to the live boot environment
by invoking this script with the '--apply-prereq' option, ie.
./installpatchset --apply-prereq --s10patchset
</syntaxhighlight>
===Solution===
<syntaxhighlight lang=bash>
root@solaris10 # cd /mnt/var/tmp/10/10_x86_Recommended
root@solaris10 # ./installpatchset --apply-prereq --s10patchset
...
Installation of prerequisite patches complete.
...
</syntaxhighlight>
==Umount the BE==
<syntaxhighlight lang=bash>
# luumount Solaris10-EIS-15JUL15
</syntaxhighlight>
==Activate BE & Reboot==
<syntaxhighlight lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</syntaxhighlight>
= Solaris 10 CPU (Critical Patch Update) with LiveUpgrade =
== Install LiveUpgrade (and some other necessary) Patches==
In the unzipped CPU do:
<syntaxhighlight lang=bash>
root@solaris10 # ./installpatchset --s10patchset --apply-prereq
</syntaxhighlight>
== Create LiveUpgrade environment ==
In this example we use the CPU_2017-07:
<syntaxhighlight lang=bash>
root@solaris10 # lucreate -n Solaris_10-CPU_2017-07
...
Population of boot environment <Solaris_10-CPU_2017-07> successful.
Creation of boot environment <Solaris_10-CPU_2017-07> successful.
</syntaxhighlight>
== Apply the patchset to the LiveUpgrade environment ==
<syntaxhighlight lang=bash>
root@solaris10 # ./installpatchset --s10patchset -B Solaris_10-CPU_2017-07
</syntaxhighlight>
== Activate the new patched LiveUpgrade environment ==
<syntaxhighlight lang=bash>
root@solaris10 # luactivate Solaris_10-CPU_2017-07
</syntaxhighlight>
You can reboot into it whenever you want, but you should do so soon: anything written in the meantime, such as logs, will exist only in the old boot environment.
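After luactivate, `lustatus` shows which boot environment comes up on the next boot. Extracting the active-on-reboot BE can be sketched like this (the sample output is fabricated, column 4 is "Active On Reboot"):
<syntaxhighlight lang=bash>
# Sketch: pick the boot environment marked active-on-reboot out of
# lustatus-style output. The sample output below is fabricated.
lustatus_output='Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u11               yes      yes    no        no     -
Solaris_10-CPU_2017-07     yes      no     yes       no     -'
# Skip the three header lines; column 4 is "Active On Reboot".
echo "$lustatus_output" | awk 'NR > 3 && $4 == "yes" { print $1 }'
</syntaxhighlight>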
84242e0e47b3708d44c5d35c3bab0dd4e89ec246
Linbit
0
289
2396
2377
2021-11-25T20:43:38Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
Disable ping-pong:
<syntaxhighlight lang=bash>
# crm configure
crm(live)configure# property maintenance-mode=on
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
</syntaxhighlight>
<syntaxhighlight lang=bash>
# crm configure
crm(live)configure# property maintenance-mode=on
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
# vi config-files...
# crm_resource -l | xargs -l crm resource cleanup
# crm configure
crm(live)configure# property maintenance-mode=off
crm(live)configure# ptest actions
INFO: install graphviz to see a transition graph
notice: LogActions: Start stonith_fence_ipmilan_hhlokva04 (hhlokva03.srv.ndr-net.de)
notice: LogActions: Start stonith_fence_ipmilan_hhlokva03 (hhlokva04.srv.ndr-net.de)
crm(live)configure# commit
crm(live)configure# ^D
crm(live)configure# bye
</syntaxhighlight>
<syntaxhighlight lang=bash>
# virsh list --all
Id Name State
----------------------------------------------------
1 OSC_v1 running
- ts_v1 shut off
</syntaxhighlight>
f0f4ca14b6ebe167007cafa584566e2def338884
ISCSI Initiator with Linux
0
387
2397
2228
2021-11-25T20:47:22Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<syntaxhighlight lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        lacp-rate: slow
        mode: 802.3ad
        transmit-hash-policy: layer2
      addresses:
        - 10.71.112.135/16
      gateway4: 10.71.101.1
      nameservers:
        addresses:
          - 10.71.111.11
          - 10.71.111.12
        search:
          - domain.de
</syntaxhighlight>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<syntaxhighlight lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
</syntaxhighlight>
=== Apply the parameters and check settings ===
<syntaxhighlight lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</syntaxhighlight>
=== Check if all components are configured right for jumbo-frames ===
<syntaxhighlight lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</syntaxhighlight>
If this does not succeed, one of the switches along the path or the iSCSI storage is missing the jumbo-frame settings.
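The `-s 8972` in the pings above is not arbitrary: the maximum unfragmented ICMP payload is the interface MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. As a quick sketch:
<syntaxhighlight lang=bash>
# Sketch of the arithmetic behind "-s 8972": ICMP payload = MTU minus
# the 20-byte IPv4 header minus the 8-byte ICMP header.
mtu=9000
payload=$((mtu - 20 - 8))
echo "max unfragmented ICMP payload for MTU $mtu: $payload"
</syntaxhighlight>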
== Configure iSCSI ==
=== Setup initiator iqn ===
Generate a new IQN:
<syntaxhighlight>
# /sbin/iscsi-iname
</syntaxhighlight>
Put the result into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<syntaxhighlight>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</syntaxhighlight>
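As the comment warns, a malformed or non-unique InitiatorName is a common reason for ACL rejections. A rough syntax check can be done with a regex (a simplified pattern, not the full IQN grammar from RFC 3720):
<syntaxhighlight lang=bash>
# Sketch: rough syntax check for an IQN of the form
# iqn.YYYY-MM.reversed.domain:suffix (simplified, not the full RFC 3720 grammar).
iqn="iqn.1993-08.org.debian:01:4efdaa48c123"
if echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:.+$'; then
  echo "IQN looks well-formed"
else
  echo "IQN is malformed"
fi
</syntaxhighlight>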
=== Setup iSCSI-Interfaces ===
<syntaxhighlight lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</syntaxhighlight>
<syntaxhighlight lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</syntaxhighlight>
=== Discover LUNs that are offered by the storage ===
<syntaxhighlight lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</syntaxhighlight>
<syntaxhighlight lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</syntaxhighlight>
=== Login to discovered LUNs ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</syntaxhighlight>
=== Take a look at the running session ===
<syntaxhighlight lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</syntaxhighlight>
=== Check the session is still ok after a restart of iscsid.service ===
<syntaxhighlight lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</syntaxhighlight>
=== Enable automatic startup of connection ===
<syntaxhighlight lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</syntaxhighlight>
=== Check timeout parameter ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</syntaxhighlight>
=== Adjust timeout values to your needs ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</syntaxhighlight>
== Configure multipathing ==
=== List SCSI devices ===
<syntaxhighlight lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</syntaxhighlight>
=== Get wwids for devices ===
<syntaxhighlight lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</syntaxhighlight>
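sdb and sdc report the same WWID, so they are two paths to the same LUN; that is exactly what multipathd will collapse into one map. The grouping can be sketched like this (device/WWID pairs copied from above):
<syntaxhighlight lang=bash>
# Sketch: group SCSI devices by WWID; any WWID seen more than once is a
# multipath LUN. The device/WWID pairs are the ones listed above.
printf '%s\n' \
  'sda 361866da075bdee001f9a2ede2705b9ba' \
  'sdb 3628dee5100f846b5243be07d00000004' \
  'sdc 3628dee5100f846b5243be07d00000004' |
awk '{ n[$2]++; paths[$2] = paths[$2] " " $1 }
     END { for (w in n) if (n[w] > 1) print "multipath LUN " w ":" paths[w] }'
</syntaxhighlight>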
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<syntaxhighlight>
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy multibus
        path_checker tur
        prio const
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 15
    }
}
blacklist {
    # devnode "^sd[a]$"
    # I highly recommend you blacklist by wwid instead of device name
    # blacklist /dev/sda by wwid
    wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
    multipath {
        wwid 3628dee5100f846b5243be07d00000004
        # alias here can be anything descriptive for your LUN
        alias data
    }
}
</syntaxhighlight>
=== Startup multipathing ===
From the multipath(8) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<syntaxhighlight lang=bash>
# multipath -r
</syntaxhighlight>
<syntaxhighlight lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</syntaxhighlight>
=== Create a systemd unit to mount it at the right time during boot ===
<syntaxhighlight lang=bash>
# systemctl edit --force --full data.mount
</syntaxhighlight>
==== /etc/systemd/system/data.mount ====
<syntaxhighlight lang=inifile>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</syntaxhighlight>
=== Enable your unit on next reboot and start it for now ===
<syntaxhighlight lang=bash>
# systemctl enable data.mount
# systemctl start data.mount
</syntaxhighlight>
=== Check for success ===
<syntaxhighlight lang=bash>
# df -h /dev/mapper/data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/data 10T 72G 10T 1% /data
</syntaxhighlight>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
e1e6e51262044bef0ac97eab1bbec5ffe67d1470
NetApp and Linux
0
227
2398
1941
2021-11-25T20:54:19Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NetApp|Linux]]
[[Category:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<source lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</source>
Partition 2 does not matter because it is just the extended-partition container holding partitioning metadata. We are aligned!
==Remove multipath LUNs==
This writes out the commands to delete multipathed devices for filer <filer> (empty volumes parameter to gawk means all volumes):
<source lang=bash>
( sanlun lun show -p <filer> ; echo ) | gawk -v volumes="/vol/vol1:/vol/vol2" '
BEGIN {
split(volumes,vols,":");
}
/ONTAP Path:/,/^$/ {
if(/ONTAP Path:/){
opath=$NF;
if (volumes==""){
todelete="yes";
}else{
odev=$NF;
gsub(/^.*:/,"",odev);
for(vol in vols){
if (odev == vols[vol] || volumes==""){
todelete="yes";
}
}
}
}
if(todelete!="yes")next;
if(/Host Device:/){
command="dmsetup info --columns --noheadings -o open /dev/mapper/"$NF;
command | getline inuse;
close(command);
if (inuse!=0) {
printf "#\n## Device %s (%s) is in use!!!\n## check with: lsof | grep \"$(dmsetup info --columns --noheadings --separator \",\" -omajor,minor /dev/mapper/%s)\"\n#\n",$NF,opath,$NF;
} else {
printf "multipath -w %s\n",$NF;
printf "multipath -f /dev/mapper/%s && (\n",$NF;
}
};
if(inuse==0 && $2 ~ /(primary|secondary)/){
printf "echo 1 > /sys/block/%s/device/delete\n",$3;
}
if (inuse==0 && /^$/) { printf ")\n";}
}
/^$/ {
mpathdevice="";
todelete="no";
delete devices;
}
'
</source>
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp KB ID: 3011193 - What is an unaligned I/O?]
3b0b0ee7b54175f54ce434c78b39b554b85767ce
2406
2398
2021-11-25T21:11:18Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:NetApp|Linux]]
[[Category:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<syntaxhighlight lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</syntaxhighlight>
Partition 2 does not matter because it is just the extended-partition container holding partitioning metadata. We are aligned!
==Remove multipath LUNs==
This writes out the commands to delete multipathed devices for filer <filer> (empty volumes parameter to gawk means all volumes):
<syntaxhighlight lang=bash>
( sanlun lun show -p <filer> ; echo ) | gawk -v volumes="/vol/vol1:/vol/vol2" '
BEGIN {
split(volumes,vols,":");
}
/ONTAP Path:/,/^$/ {
if(/ONTAP Path:/){
opath=$NF;
if (volumes==""){
todelete="yes";
}else{
odev=$NF;
gsub(/^.*:/,"",odev);
for(vol in vols){
if (odev == vols[vol] || volumes==""){
todelete="yes";
}
}
}
}
if(todelete!="yes")next;
if(/Host Device:/){
command="dmsetup info --columns --noheadings -o open /dev/mapper/"$NF;
command | getline inuse;
close(command);
if (inuse!=0) {
printf "#\n## Device %s (%s) is in use!!!\n## check with: lsof | grep \"$(dmsetup info --columns --noheadings --separator \",\" -omajor,minor /dev/mapper/%s)\"\n#\n",$NF,opath,$NF;
} else {
printf "multipath -w %s\n",$NF;
printf "multipath -f /dev/mapper/%s && (\n",$NF;
}
};
if(inuse==0 && $2 ~ /(primary|secondary)/){
printf "echo 1 > /sys/block/%s/device/delete\n",$3;
}
if (inuse==0 && /^$/) { printf ")\n";}
}
/^$/ {
mpathdevice="";
todelete="no";
delete devices;
}
'
</syntaxhighlight>
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp KB ID: 3011193 - What is an unaligned I/O?]
8128f4ec24c1e9aee8d7f1d90037d7015d24cff9
2413
2406
2021-11-25T21:33:51Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:NetApp|Linux]]
[[Category:Linux|NetApp]]
==Check partitioning==
Maybe this works:
<syntaxhighlight lang=bash>
# fdisk -l /dev/sda | awk '/^\/dev\//{fieldnum=2;if($fieldnum=="*"){fieldnum++}blockalign=$fieldnum/8;if(blockalign!=int(blockalign)){match($0,$fieldnum);printf "%s <<<<<<< Bucket %d\n",substr($0,0,RSTART-2)"_"substr($0,RSTART,RLENGTH)"_"substr($0,RSTART+RLENGTH+1),8*(blockalign-int(blockalign));}else{print $0;}next;}{print;}'
Disk /dev/sda: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d6bea
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 13672447 6835200 83 Linux
/dev/sda2 _13674494_ 33552383 9938945 5 Extended <<<<<<< Bucket 6
/dev/sda5 13674496 25391103 5858304 83 Linux
/dev/sda6 25393152 33552383 4079616 82 Linux swap / Solaris
</syntaxhighlight>
Partition 2 does not matter because it is just the extended-partition container holding partitioning metadata. We are aligned!
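The long awk one-liner boils down to a single rule: with 512-byte sectors, a partition is aligned to the filer's 4 KiB blocks when its start sector is divisible by 8, and the remainder is the "bucket". A sketch using the start sectors from the fdisk output above:
<syntaxhighlight lang=bash>
# Sketch of the alignment rule behind the one-liner above: a start sector
# (in 512-byte units) is 4K-aligned when it is divisible by 8; the
# remainder is the "bucket". Start sectors taken from the fdisk output.
for start in 2048 13674494 13674496 25393152; do
  if [ $((start % 8)) -eq 0 ]; then
    echo "start $start: aligned"
  else
    echo "start $start: misaligned (bucket $((start % 8)))"
  fi
done
</syntaxhighlight>
This reproduces the "Bucket 6" annotation for the extended partition starting at sector 13674494.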
==Remove multipath LUNs==
This writes out the commands to delete multipathed devices for filer <filer> (empty volumes parameter to gawk means all volumes):
<syntaxhighlight lang=bash>
( sanlun lun show -p <filer> ; echo ) | gawk -v volumes="/vol/vol1:/vol/vol2" '
BEGIN {
split(volumes,vols,":");
}
/ONTAP Path:/,/^$/ {
if(/ONTAP Path:/){
opath=$NF;
if (volumes==""){
todelete="yes";
}else{
odev=$NF;
gsub(/^.*:/,"",odev);
for(vol in vols){
if (odev == vols[vol] || volumes==""){
todelete="yes";
}
}
}
}
if(todelete!="yes")next;
if(/Host Device:/){
command="dmsetup info --columns --noheadings -o open /dev/mapper/"$NF;
command | getline inuse;
close(command);
if (inuse!=0) {
printf "#\n## Device %s (%s) is in use!!!\n## check with: lsof | grep \"$(dmsetup info --columns --noheadings --separator \",\" -omajor,minor /dev/mapper/%s)\"\n#\n",$NF,opath,$NF;
} else {
printf "multipath -w %s\n",$NF;
printf "multipath -f /dev/mapper/%s && (\n",$NF;
}
};
if(inuse==0 && $2 ~ /(primary|secondary)/){
printf "echo 1 > /sys/block/%s/device/delete\n",$3;
}
if (inuse==0 && /^$/) { printf ")\n";}
}
/^$/ {
mpathdevice="";
todelete="no";
delete devices;
}
'
</syntaxhighlight>
==Links==
* [https://kb.netapp.com/index?page=content&id=3011193 NetApp KB ID: 3011193 - What is an unaligned I/O?]
442c1bdbb1f34836c5c59b3721151b31883df64b
PHP
0
361
2399
2304
2021-11-25T20:57:30Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:PHP]]
==Install mcrypt on Ubuntu 18.04==
<syntaxhighlight lang=bash>
$ sudo apt -y install gcc make autoconf libc-dev pkg-config libmcrypt-dev php7.2-dev
$ sudo pecl install --nodeps mcrypt-snapshot
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ echo "extension=mcrypt.so" | sudo tee -a /etc/php/7.2/fpm/php.ini
$ php-fpm7.2 -i | grep mc
Registered Stream Filters => zlib.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, mcrypt.*, mdecrypt.*, bzip2.*, convert.iconv.*
mcrypt
mcrypt support => enabled
mcrypt_filter support => enabled
mcrypt.algorithms_dir => no value => no value
mcrypt.modes_dir => no value => no value
</syntaxhighlight>
599428b6993a0770879ec8ee1cf1b0de6dec42f8
Cerithium caeruleum
0
122
2400
372
2021-11-25T20:59:22Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Nadelschnecke
| WissName = Cerithium caeruleum
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
f986c9f4c2a0e7c13b93da595ed9f7209c578a8a
NetApp Commands
0
201
2401
1912
2021-11-25T20:59:23Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:NetApp]]
==Alignment==
CDOT 8.3:
<syntaxhighlight lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</syntaxhighlight>
To see in which bucket the reads and writes occur:
<syntaxhighlight lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</syntaxhighlight>
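The eight comma-separated histogram values appear to be per-bucket percentages of I/O, with bucket 0 meaning aligned. Finding the dominant bucket of one histogram can be sketched as:
<syntaxhighlight lang=bash>
# Sketch: find the dominant alignment bucket of a read/write histogram
# as shown above (bucket 0 means the I/O is aligned).
hist="99,0,0,0,0,0,0,0"
echo "$hist" | tr ',' '\n' | awk '
  NR == 1 || $1 > max { max = $1; bucket = NR - 1 }
  END { print "dominant bucket: " bucket " (" max "% of I/O)" }'
</syntaxhighlight>
For the histograms above this reports bucket 0, i.e. aligned I/O.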
==Performance==
<syntaxhighlight lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</syntaxhighlight>
=== Flashpool ===
<syntaxhighlight lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</syntaxhighlight>
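The avg_latency counters can be filtered against a threshold to spot slow LUNs; a sketch with fabricated stats lines and an arbitrary 10 ms threshold:
<syntaxhighlight lang=bash>
# Sketch: flag LUNs whose average latency exceeds a threshold. The stats
# lines and the 10 ms (10000 us) threshold are fabricated for illustration.
threshold_us=10000
printf '%s\n' \
  'lun:/vol/vol1/lun1:avg_latency:2514us' \
  'lun:/vol/vol2/lun2:avg_latency:18392us' |
awk -F: -v t="$threshold_us" '
  { us = $4; sub(/us$/, "", us)
    if (us + 0 > t) print $2 " exceeds threshold: " us " us" }'
</syntaxhighlight>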
== User ==
=== Create snapshot user ===
<syntaxhighlight lang=bash>
security login role create -vserver svm1 vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</syntaxhighlight>
=== Create snapshot user for the HTTP API ===
==== Create the role ====
<syntaxhighlight lang=bash>
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
</syntaxhighlight>
==== Check role parameter ====
<syntaxhighlight lang=bash>
set -showseparator ";" -showallfields true
security login role show -vserver svm42 -role ansible-snapshot-only Role
vserver;role;profilename;cmddirname;access;query;
Vserver;Role Name;Role Name;Command / Directory;Access Level;Query;
svm42;ansible-snapshot-only;ansible-snapshot-only;DEFAULT;none;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot";readonly;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot create";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot delete";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot modify";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot show";all;"-snapshot ansible_*";
</source>
==== Create user with role ====
<syntaxhighlight lang=bash>
security login create -vserver svm42 -application ontapi -authentication-method password -role ansible-snapshot-only -user-or-group-name ansible
</source>
==Network interfaces==
<syntaxhighlight lang=bash>
ncl01::> network interface show -vserver ncl1
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
ncl01
cluster_mgmt up/up 10.10.20.41/24 ncl01-01 a0a true
ncl01-01-ic1 up/up 10.10.20.44/24 ncl01-01 a0a true
ncl01-01_mgmt1 up/up 10.10.20.42/24 ncl01-01 a0a true
ncl01-02-ic1 up/up 10.10.20.45/24 ncl01-02 a0a true
ncl01-02_mgmt1 up/up 10.10.20.43/24 ncl01-02 a0a true
5 entries were displayed.
ncl01::> network port show -link down
Node: ncl01-01
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
Node: ncl01-02
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
4 entries were displayed.
ncl01::> network port show -health-status degraded
There are no entries matching your query.
ncl01::> network port ifgrp show
Port Distribution Active
Node IfGrp Function MAC Address Ports Ports
-------- ---------- ------------ ----------------- ------- -------------------
ncl01-01
a0a ip 02:a0:98:6d:06:b7 full e0i, e0k
a0b ip 02:a0:98:6d:06:b8 full e3a, e3b, e7a, e7b
ncl01-02
a0a ip 02:a0:98:6d:07:1f full e0i, e0k
a0b ip 02:a0:98:6d:07:20 full e3a, e3b, e7a, e7b
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
ncl01::> network port show -fields speed-oper -port e0j,e0l
node port speed-oper
------------- ---- ----------
ncl01-01 e0j 1000
ncl01-01 e0l 1000
ncl01-02 e0j 1000
ncl01-02 e0l 1000
4 entries were displayed.
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
</source>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
4ff25f8b4f689247048862033153c75f0438f4bb
2426
2401
2021-11-25T22:27:22Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: NetApp]]
==Alignment==
CDOT 8.3:
<syntaxhighlight lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</syntaxhighlight>
To see in which buckets the reads and writes occur:
<syntaxhighlight lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</syntaxhighlight>
==Performance==
<syntaxhighlight lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</syntaxhighlight>
=== Flashpool ===
<syntaxhighlight lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</syntaxhighlight>
== User ==
=== Create snapshot user ===
<syntaxhighlight lang=bash>
security login role create -vserver svm1 -role vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 -role vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 -role vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 -role vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</syntaxhighlight>
=== Create snapshot user for http api ===
==== Create the role ====
<syntaxhighlight lang=bash>
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
</syntaxhighlight>
==== Check role parameter ====
<syntaxhighlight lang=bash>
set -showseparator ";" -showallfields true
security login role show -vserver svm42 -role ansible-snapshot-only Role
vserver;role;profilename;cmddirname;access;query;
Vserver;Role Name;Role Name;Command / Directory;Access Level;Query;
svm42;ansible-snapshot-only;ansible-snapshot-only;DEFAULT;none;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot";readonly;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot create";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot delete";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot modify";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot show";all;"-snapshot ansible_*";
</syntaxhighlight>
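The semicolon-separated output of <i>-showseparator</i> is handy for machine processing. As a small illustration (the rows below are shortened, made-up samples modeled on the listing above, and a POSIX awk is assumed to be installed), awk can realign the fields into columns:

```shell
# realign ";"-separated role fields into readable columns
printf '%s\n' \
  'cmddirname;access;query' \
  'DEFAULT;none;""' \
  'volume snapshot create;all;"-snapshot ansible_*"' |
awk -F';' '{ printf "%-24s %-10s %s\n", $1, $2, $3 }'
```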
==== Create user with role ====
<syntaxhighlight lang=bash>
security login create -vserver svm42 -application ontapi -authentication-method password -role ansible-snapshot-only -user-or-group-name ansible
</syntaxhighlight>
==Network interfaces==
<syntaxhighlight lang=bash>
ncl01::> network interface show -vserver ncl1
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
ncl01
cluster_mgmt up/up 10.10.20.41/24 ncl01-01 a0a true
ncl01-01-ic1 up/up 10.10.20.44/24 ncl01-01 a0a true
ncl01-01_mgmt1 up/up 10.10.20.42/24 ncl01-01 a0a true
ncl01-02-ic1 up/up 10.10.20.45/24 ncl01-02 a0a true
ncl01-02_mgmt1 up/up 10.10.20.43/24 ncl01-02 a0a true
5 entries were displayed.
ncl01::> network port show -link down
Node: ncl01-01
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
Node: ncl01-02
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
4 entries were displayed.
ncl01::> network port show -health-status degraded
There are no entries matching your query.
ncl01::> network port ifgrp show
Port Distribution Active
Node IfGrp Function MAC Address Ports Ports
-------- ---------- ------------ ----------------- ------- -------------------
ncl01-01
a0a ip 02:a0:98:6d:06:b7 full e0i, e0k
a0b ip 02:a0:98:6d:06:b8 full e3a, e3b, e7a, e7b
ncl01-02
a0a ip 02:a0:98:6d:07:1f full e0i, e0k
a0b ip 02:a0:98:6d:07:20 full e3a, e3b, e7a, e7b
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
ncl01::> network port show -fields speed-oper -port e0j,e0l
node port speed-oper
------------- ---- ----------
ncl01-01 e0j 1000
ncl01-01 e0l 1000
ncl01-02 e0j 1000
ncl01-02 e0l 1000
4 entries were displayed.
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
</syntaxhighlight>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
a2fea33e11b972c59ed4b3d3b710e670197d70cc
Fail2ban
0
276
2402
2239
2021-11-25T21:02:18Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Security]]
[[Category:Linux]]
==Installation==
===Debian / Ubuntu===
<syntaxhighlight lang=bash>
# apt-get install fail2ban
</syntaxhighlight>
==Configuration==
To be safe across updates, put your personal settings into the <i>*.local</i> files. This protects them from being overwritten by package update procedures.
===paths-overrides.local===
My logfiles have date parts in their names, so the fail2ban defaults would fail to find them.
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
# doveadm log find
Looking for log files from /var/log
Debug: /var/log/dovecot/dovecot.debug-20160309
Info: /var/log/dovecot/dovecot.debug-20160309
Warning: /var/log/dovecot/dovecot.log-20160309
Error: /var/log/dovecot/dovecot.log-20160309
Fatal: /var/log/dovecot/dovecot.log-20160309
</syntaxhighlight>
<syntaxhighlight lang=ini>
[DEFAULT]
dovecot_log = /var/log/dovecot/dovecot.log-*
exim_main_log = /var/log/exim/mainlog-*
</syntaxhighlight>
===jail.local===
<syntaxhighlight lang=ini>
[DEFAULT]
bantime = 3600
[sshd]
enabled = true
[exim-spam]
enabled = true
[exim]
enabled = true
[sshd-ddos]
enabled = true
[dovecot]
enabled = true
[sieve]
enabled = true
</syntaxhighlight>
5085381ad93b0031621fe45b9b398fb989dc2e0b
Awk cheatsheet
0
292
2403
2363
2021-11-25T21:04:58Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:AWK|Cheatsheet]]
==Functions==
===Bytes to human readable===
<syntaxhighlight lang=awk>
function b2h(value){
# Bytes to human readable
unit=1;
while(value>=1024){
unit++;
value/=1024;
}
split("B,KB,MB,GB,TB,PB", unit_string, /,/);
return sprintf("%.2f%s",value,unit_string[unit]);
}
</syntaxhighlight>
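A quick way to try the function from the shell (assumes a POSIX awk is installed; the byte counts are arbitrary examples):

```shell
# call b2h() from the command line
awk 'function b2h(value){
       unit=1;
       while(value>=1024){ unit++; value/=1024; }
       split("B,KB,MB,GB,TB,PB", unit_string, /,/);
       return sprintf("%.2f%s", value, unit_string[unit]);
     }
     BEGIN { print b2h(1536); print b2h(1073741824) }'
# -> 1.50KB
# -> 1.00GB
```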
===Binary to decimal===
<syntaxhighlight lang=awk>
function b2d(bin){
len=length(bin);
for(i=1;i<=len;i++){
dec+=substr(bin,i,1)*2^(len-i);
}
return dec;
}
</syntaxhighlight>
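Same idea for b2d(), again callable straight from the shell (the binary string is an arbitrary example):

```shell
# call b2d() from the command line; binary 1010 -> decimal 10
awk 'function b2d(bin){
       len=length(bin);
       for(i=1;i<=len;i++){ dec+=substr(bin,i,1)*2^(len-i); }
       return dec;
     }
     BEGIN { print b2d("1010") }'
# -> 10
```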
===Quicksort===
This is not my code! It is taken from [http://awk.info/?quicksort here], possibly slightly modified (I cannot check, the site is down).
You can call it like this: qsort(array,1,length(array));
<syntaxhighlight lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</syntaxhighlight>
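The call pattern described above can be tried directly from the shell; here the numeric variant sorts a small made-up list (a POSIX awk is assumed):

```shell
# sort a numeric list with the qsort()/swap() pair from above
awk 'function qsort(A, left, right,   i, last) {
       if (left >= right) return;
       swap(A, left, left+int((right-left+1)*rand()));
       last = left;
       for (i = left+1; i <= right; i++)
         if (int(A[i]) < int(A[left])) swap(A, ++last, i)
       swap(A, left, last)
       qsort(A, left, last-1)
       qsort(A, last+1, right)
     }
     function swap(A, i, j,   t) { t = A[i]; A[i] = A[j]; A[j] = t }
     BEGIN {
       n = split("3 1 4 1 5 9 2 6", a, " ");
       qsort(a, 1, n);
       for (i = 1; i <= n; i++) printf "%s%s", a[i], (i < n ? " " : "\n");
     }'
# -> 1 1 2 3 4 5 6 9
```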
Same for alphanumeric:
<syntaxhighlight lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</syntaxhighlight>
Test:
<syntaxhighlight lang=awk>
BEGIN {
string="1524097359810345254";
split(string,array_i,"");
string="ThisIsAQsortExample";
split(string,array_a,"");
}
# BEGIN http://awk.info/?quicksort
function qsort_i(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort_i(A, left, last-1)
qsort_i(A, last+1, right)
}
function qsort_a(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort_a(A, left, last-1)
qsort_a(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
END {
for(element in array_i)
printf array_i[element];
printf " ===qsort==> "
qsort_i(array_i,1,length(array_i));
for(element in array_i)
printf array_i[element];
print;
for(element in array_a)
printf array_a[element];
printf " ===qsort==> "
qsort_a(array_a,1,length(array_a));
for(element in array_a)
printf array_a[element];
print;
}
</syntaxhighlight>
which outputs:
<pre>
1524097359810345254 ===qsort==> 0011223344455557899
ThisIsAQsortExample ===qsort==> AEIQTaehilmoprssstx
</pre>
===Sort words inside braces (gawk)===
Written to beautify lines like
<syntaxhighlight lang=mysql>
GRANT SELECT (account_id, user, enable_imap, fk_domain_id, enable_virusscan, max_msg_size, from_authuser_only, id, time_start, changed_by, enable_spamblocker, onhold, time_end, archive, mailbox_quota) ON `mail_db`.`mail_account` TO 'user'@'172.16.16.16'
</syntaxhighlight>
This function:
<syntaxhighlight lang=awk>
function inner_brace_sort (rest, delimiter) {
sorted="";
while( match(rest,/\([^\)]+\)/) ) {
sorted=sprintf("%s%s", sorted, substr(rest, 1, RSTART));
inner=substr(rest, RSTART+1, RLENGTH-2);
rest=substr(rest, RSTART+RLENGTH-1, length(rest));
split(inner, inner_a, delimiter);
inner_l=asort(inner_a, inner_s);
for(i=1; i<=inner_l; i++) {
sorted=sprintf("%s%s", sorted, inner_s[i]);
if(i<inner_l) sorted=sprintf("%s, ", sorted);
}
sorted=sprintf("%s", sorted);
}
return sorted""rest;
}
</syntaxhighlight>
Sorts the fields inside the braces alphabetically and can be called like this:
<syntaxhighlight lang=awk>
/\(/ {
print inner_brace_sort($0, ",[ ]*");
}
</syntaxhighlight>
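The function above needs gawk's asort(). If only a plain POSIX shell is at hand, a single "(...)" group can be sorted with standard tools instead; the GRANT line below is a shortened, made-up sample:

```shell
# sort the comma-separated list inside the first "(...)" group with sort(1)
line="GRANT SELECT (user, account_id, id) ON db.t TO 'u'@'h'"
inner=${line#*\(}; inner=${inner%%\)*}          # text between the braces
sorted=$(printf '%s\n' $inner | tr -d ',' | sort | paste -sd, - | sed 's/,/, /g')
printf '%s(%s)%s\n' "${line%%\(*}" "$sorted" "${line#*\)}"
# -> GRANT SELECT (account_id, id, user) ON db.t TO 'u'@'h'
```

This only handles one group per line; the awk version loops over every match.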
a7e261f36c3967db3851d96530f0104d023bc092
Nice Options
0
253
2404
2372
2021-11-25T21:05:57Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
Linux:
<syntaxhighlight lang=bash>
ls -aldi
ls -aladin
netstat -plant
netstat -tulpen
ss -open4all
journalctl -efeu
grep -Hirn
pwgen -nancy 17
</syntaxhighlight>
Solaris:
<syntaxhighlight lang=bash>
prstat -Lmaa
iostat -Erni
</syntaxhighlight>
fd2ed7187a79b194c86140f8e534ab5413070562
Messor barbarus
0
10
2405
102
2021-11-25T21:09:57Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Linnaeus, 1767)
| WissName = Messor barbarus
| Gattung = Messor
| Unterfamilie = Myrmicinae
| Art = barbarus
| Verbreitung = Südeuropa, Afrika, Asien
| Koeniginnen = [[Monogynie|monogyn]]
| Gruendung = [[Gründung#Die unabhängige Koloniegründung durch einzelne Königinnen (claustrale/semiclaustrale Gründung)|claustral]]
}}[[Category:Messor]]
3eef3a1bf49951d7aef79a5db020c444cf65f9f7
Tetramorium caespitum
0
15
2407
190
2021-11-25T21:17:07Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Ameisenart
| Autor = (Mayr, 1855)
| WissName = Tetramorium caespitum
| Gattung = Tetramorium
| Unterfamilie = Myrmicinae
| Art = caespitum
| Bild =
| Bildbeschreibung =
| Verbreitung =
| Gründung =
| Koeniginnen = [[Monogynie|monogyn]]
| maxKolo = 80000
| Nest = Erdnest
| Ausbruchsschutz = Paraffinöl
| Futter = Brot und Körner, Insekten, Zuckerwasser
}}
==General==
Tetramorium caespitum is a small, relatively slow species that occurs in sandy areas.
In Hamburg I have been able to find it at the edge of the harbour area (Kirchwerder) and in the Holmer Sandberge.
==My own keeping experiences==
I dug up a small colony in Kirchwerder and would like to report a little about my experiences here.
This species behaves differently towards food than my other ants. Offered mealworms are usually covered with sand within half a day; under the sand the mealworms are then hollowed out.
Water dishes are also happily buried in sand, which is why I sometimes call them the messiest of my ants.
[[Category:Tetramorium]]
a8153c63444cd9fd91c0162532b4b6939f163f12
Nextcloud
0
368
2408
2184
2021-11-25T21:18:59Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<syntaxhighlight lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</syntaxhighlight>
==Send calendar events==
Set sendEventRemindersMode to occ:
<syntaxhighlight lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</syntaxhighlight>
and add a cronjob for the user running the webserver:
<syntaxhighlight lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</syntaxhighlight>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<syntaxhighlight lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</syntaxhighlight>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
<pre>
apc.enable_cli=1
</pre>
Otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<syntaxhighlight lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</syntaxhighlight>
and since version 19:
<syntaxhighlight lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</syntaxhighlight>
Answer the questions...
If you have your own theme, proceed with these steps:
<syntaxhighlight lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</syntaxhighlight>
And the apps:
<syntaxhighlight lang=bash>
# occ app:update --all
</syntaxhighlight>
=Some tweaks for the theme to disable several things=
<syntaxhighlight lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</syntaxhighlight>
= Memcached =
You can import one of the following versions of the config file with
<syntaxhighlight lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</syntaxhighlight>
== ip:port ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
1121
]
]
}
}
</syntaxhighlight>
== socket ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</syntaxhighlight>
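Since <i>occ config:import</i> only accepts valid JSON (note the double quotes and the escaped backslashes), a quick syntax check before importing can save a round trip. A minimal sketch, assuming python3 is installed; the file path and the standard memcached port 11211 are stand-in values:

```shell
# validate a memcached config snippet before feeding it to "occ config:import"
cat > /tmp/memcache.json <<'EOF'
{
  "system": {
    "memcache.distributed": "\\OC\\Memcache\\Memcached",
    "memcached_servers": [ [ "127.0.0.1", 11211 ] ]
  }
}
EOF
python3 -m json.tool /tmp/memcache.json >/dev/null && echo "valid JSON"
```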
72bbb6eafb0d1c5861e9186df81a8afb58ad8fd6
TShark
0
238
2409
2266
2021-11-25T21:19:48Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal-based Wireshark.]
The ultimate tool for sniffing network traffic when you have no X. It analyzes the traffic just as Wireshark does. Great tool!
==MySQL traffic==
To look for MySQL traffic on an application server you can use this line:
<syntaxhighlight lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# IFACE=ens192 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -Y "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.auth_plugin -e mysql.client_auth_plugin -e mysql.error_code -e mysql.error.message -e mysql.message -e mysql.user -e mysql.passwd -e mysql.command 'port 3306'
</syntaxhighlight>
The little awk magic selects only packets that come from our own Ethernet address on interface ''IFACE''.
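That awk fragment can be tried on its own; the link/ether line below is a made-up sample of <i>ip link show</i> output:

```shell
# extract the MAC address from an "ip link show" style line
printf '%s\n' '    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff' |
awk '$1 ~ /link\/ether/ { print $2 }'
# -> 02:42:ac:11:00:02
```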
==Radius traffic==
Find the client with MAC address fc-18-3c-4a-c1-fa:
<syntaxhighlight lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2 , see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</syntaxhighlight>
With older tshark versions try:
<syntaxhighlight lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</syntaxhighlight>
==Duplicate ACKs==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</syntaxhighlight>
==Finding TCP problems==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</syntaxhighlight>
==Decode SSL Connections==
For example, show used TLS versions lower than 1.2.
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<syntaxhighlight lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</syntaxhighlight>
or for https:
<syntaxhighlight lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</syntaxhighlight>
ab03a5cb4866582d94c731653b3b9f2afd00ed34
Category:Archispirostreptus
14
12
2410
25
2021-11-25T21:25:28Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Tausendfuesser]]
350bfff5a2ce40cf57a13992717d176cf5568e4a
Ecryptfs
0
349
2411
2359
2021-11-25T21:26:17Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
==Tips & Tricks==
===ecryptfs-mount-private -> mount: No such file or directory===
====Problem====
<syntaxhighlight lang=bash>
user@host:~$ ecryptfs-mount-private
Enter your login passphrase:
Inserted auth tok with sig [affecaffeeaffe00] into the user session keyring
mount: No such file or directory
user@host:~$
</syntaxhighlight>
The keys are correctly unlocked:
<syntaxhighlight lang=bash>
user@host:~$ keyctl list @u
2 keys in keyring:
1013878144: --alswrv 2223 2223 user: affecaffeeaffe01
270316877: --alswrv 2223 2223 user: affecaffeeaffe02
</syntaxhighlight>
But no luck:
<syntaxhighlight lang=bash>
$ ls -al
total 20
drwx------ 3 ansible admin 8 Dez 7 09:12 .
drwxr-xr-x 6 root root 6 Dez 7 09:10 ..
lrwxrwxrwx 1 root root 32 Dez 7 09:11 .Private -> /home/.ecryptfs/ansible/.Private
lrwxrwxrwx 1 root root 33 Dez 7 09:11 .ecryptfs -> /home/.ecryptfs/ansible/.ecryptfs
lrwxrwxrwx 1 root root 52 Dez 7 09:12 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt
lrwxrwxrwx 1 root root 56 Dez 7 09:11 ecryptfs-mount-private.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop
</syntaxhighlight>
====Workaround====
<syntaxhighlight lang=bash>
user@host:~$ keyctl link @u @s
user@host:~$ ecryptfs-mount-private
user@host:~$
</syntaxhighlight>
135f4ce3079c7e79eb5638ecb6985df3309ce247
Find free ip
0
366
2412
2303
2021-11-25T21:28:19Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category: Bash|find_free_ip]]
<syntaxhighlight lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.2 2019/09/06 14:33:32 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
function usage () {
printf "Usage: ${0} <ip address>[/(<CIDR suffix>|<netmask>)]\n\n"
printf " This script searches a range of IP addresses for ones that have no reverse DNS.\n"
printf " Default range if no CIDR suffix or netmask is given is a class C (/24) range of 256 addresses.\n"
printf " address : This has to be an IPv4 address. Zero octets can be omitted.\n"
printf " For example 192.168 is sufficient for 192.168.0.0 .\n"
printf " CIDR suffix : This describes the number of bits set to 1 from left in the netmask.\n"
printf " netmask : Four octets representing the netmask.\n"
printf "\n"
}
case ${1} in
""|--help|-h)
usage
exit 1
;;
*)
input=${1}
;;
esac
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
function bin2ones () {
bin=${1}
ones=0
for((i=0;i<${#bin};i++))
do
bit=${bin:$i:1}
[ ${bit} -eq 0 ] && break
ones=$[ ones + 1 ]
done
echo ${ones}
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec $(fillOctets ${suffix})))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/$(bin2ones ${suffixbin})\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} >/dev/null 2>&1 ; pingable=$? # ${PING} already contains ${ip}
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfree\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$[ ${count} + 1 ]
else
printf "%s\n" "${info}"
count=1
fi
done
</syntaxhighlight>
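The binary-string helpers in the script above can also be replaced by plain shell integer math. A minimal sketch (the address 192.168.1.10/28 is just an example):

<syntaxhighlight lang=bash>
#!/bin/bash
# Compute network and broadcast for 192.168.1.10/28 with pure integer math.
ip=192.168.1.10 suffix=28
IFS=. read -r o1 o2 o3 o4 <<< "${ip}"
dec=$(( (o1<<24) | (o2<<16) | (o3<<8) | o4 ))          # IPv4 -> decimal
mask=$(( (0xFFFFFFFF << (32 - suffix)) & 0xFFFFFFFF )) # CIDR suffix -> netmask
net=$(( dec & mask ))                                  # network address
bcast=$(( net | (mask ^ 0xFFFFFFFF) ))                 # broadcast address
# decimal -> dotted quad
dec2ip() { printf '%d.%d.%d.%d' $(( $1>>24&255 )) $(( $1>>16&255 )) $(( $1>>8&255 )) $(( $1&255 )); }
echo "network:   $(dec2ip ${net})"
echo "broadcast: $(dec2ip ${bcast})"
</syntaxhighlight>

For 192.168.1.10/28 this prints network 192.168.1.0 and broadcast 192.168.1.15, matching the script above.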
9cb43adcd21051e58d0d65dbf988a68b92894ded
VirtualBox physical mapping
0
355
2415
2346
2021-11-25T21:40:27Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Virtualbox]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems with installing Windows updates alongside grub. Some Windows updates fail if you have grub in your MBR.
===Create a dummy mbr===
<syntaxhighlight lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
===Create the mapping as a VMDK file===
<syntaxhighlight lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
After that create a VM and use this special VMDK file.
061cdf1b3f55f4f6eefe9cda05dac8e35b09d271
MySQL slave with LVM
0
239
2416
938
2021-11-25T21:41:09Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<syntaxhighlight lang=bash>
master# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
master# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
</syntaxhighlight>
Enough space for a snapshot?
<syntaxhighlight lang=bash>
master# lvs /dev/mapper/vg--mysql-mysql--data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
mysql-data vg-mysql -wi-ao--- 140,00g
master# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</syntaxhighlight>
===Create a consistent snapshot===
<syntaxhighlight lang=bash>
master# mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > ${DATADIR}/master_status.$(date "+%Y%m%d_%H%M%S")
master# lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master# mysql -e "UNLOCK TABLES;"
master# mount /dev/vg-mysql/mysql-data-snap /mnt
master# cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master# mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</syntaxhighlight>
Set the innodb_data_file_path to the same value on the slave.
==Copy the data to the slave==
<syntaxhighlight lang=bash>
slave# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd ${DATADIR} ; tar xlvSpzf - )
</syntaxhighlight>
==Create replication user on master==
<syntaxhighlight lang=bash>
master# mysql -e ""
</syntaxhighlight>
==Setup slave==
<syntaxhighlight lang=bash>
slave# mysql -e ""
</syntaxhighlight>
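The two empty mysql commands above are placeholders (the page is marked unfinished). A typical sequence could look like this; the user name 'repl' and password 'secret' are placeholders, and the binlog file/position come from the master_status file shown earlier:

<syntaxhighlight lang=bash>
# On the master: create the replication user (name and password are examples)
master# mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"
# On the slave: point it at the binlog position recorded during the snapshot
slave# mysql -e "CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.002366', MASTER_LOG_POS=263911913; START SLAVE;"
# Check the replication status
slave# mysql -e "SHOW SLAVE STATUS\G"
</syntaxhighlight>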
4ffa23243cd5b76cb0a3fcea3997746b5e7b6139
Category:Exim
14
28
2418
49
2021-11-25T21:50:35Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Category:Lasius
14
18
2419
31
2021-11-25T21:59:48Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ameisen]]
8b4529e141e02312735639bddbd1b0df94a379a3
Capnella imbricata
0
126
2420
345
2021-11-25T22:01:09Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Bäumchenweichkoralle
| WissName = Capnella imbricata
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
00206ddff50750b75ae06832c09d1ebb7d39f950
Dpkg
0
244
2421
2307
2021-11-25T22:03:27Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Linux]]
==Missing key id NO_PUBKEY==
<syntaxhighlight lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</syntaxhighlight>
==Package sources that resolve to IPv6 addresses sometimes cause problems==
To force the usage of the returned IPv4 addresses do:
<syntaxhighlight lang=bash>
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
</syntaxhighlight>
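The same option can also be passed for a single run instead of persisting it, using apt's standard -o option syntax:

<syntaxhighlight lang=bash>
$ sudo apt-get -o Acquire::ForceIPv4=true update
</syntaxhighlight>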
==Packages from a specific source==
===Prerequisite: dctrl-tools===
<syntaxhighlight lang=bash>
sudo apt-get install dctrl-tools
</syntaxhighlight>
===Show packages===
For example all PPA packages
<syntaxhighlight lang=bash>
sudo grep-dctrl -sPackage . /var/lib/apt/lists/ppa*_Packages
</syntaxhighlight>
==From where is my package installed?==
<syntaxhighlight lang=bash>
sudo apt-cache policy <package>
</syntaxhighlight>
==Does my file match the checksum from the package?==
If you fear you have been hacked, verify your binaries!
===Prerequisite: debsums===
<syntaxhighlight lang=bash>
sudo apt-get install debsums
</syntaxhighlight>
===Verify packages===
<syntaxhighlight lang=bash>
sudo debsums <package name>
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo debsums unhide.rb
/usr/bin/unhide.rb OK
/usr/share/doc/unhide.rb/changelog.Debian.gz OK
/usr/share/doc/unhide.rb/copyright OK
/usr/share/lintian/overrides/unhide.rb OK
/usr/share/man/man8/unhide.rb.8.gz OK
</syntaxhighlight>
db1db11a099b6f958885fb507116d84415fffe0c
Category:Tapinoma
14
61
2422
105
2021-11-25T22:09:32Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ameisen]]
8b4529e141e02312735639bddbd1b0df94a379a3
Category:ZFS
14
31
2423
834
2021-11-25T22:10:57Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Template:Überschriftensimulation 4
10
53
2424
91
2021-11-25T22:12:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Anker|{{{1}}}}}<div class="Vorlage_Ueberschriftensimulation_4" style="margin:0; margin-bottom:.3em; padding-top:.5em; padding-bottom:.17em; background:none; font-size:116%; color:black; font-weight:bold">{{{1}}}</div><noinclude>
----
Simuliert in ''Diskussionseiten'' eine Überschrift, die nicht im Inhaltsverzeichnis erscheinen soll. In ''Artikeln'' darf diese Vorlage nicht verwendet werden; dafür gibt es andere Lösungen, siehe [[Hilfe:Inhaltsverzeichnis]].
Für Syntax und Anwendung siehe [[Wikipedia:Textbausteine/Formatierungshilfen]].
[[Category:Vorlage:Formatierungshilfe|Uberschriftensimulation 4]]
</noinclude>
2dad03e18a18be303327f41523578b955a374f38
MariaDB on ZFS
0
294
2425
1979
2021-11-25T22:12:06Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie: MySQL|ZFS]]
[[Kategorie: MariaDB|ZFS]]
==ZFS parameters==
<syntaxhighlight lang=bash>
zfs set atime=off MYSQL-DATA
zfs set compression=lz4 MYSQL-DATA
zfs set atime=off MYSQL-LOG
zfs set compression=lz4 MYSQL-LOG
zfs set recordsize=8k MYSQL-DATA/data
zfs set recordsize=16k MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-LOG/ib_log
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</syntaxhighlight>
===If you have innodb_file_per_table=on===
<syntaxhighlight lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</syntaxhighlight>
* If you only have InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k as well, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<syntaxhighlight lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = off
skip-innodb_doublewrite
</syntaxhighlight>
<syntaxhighlight lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</syntaxhighlight>
On Linux, do not forget to add the new directories to AppArmor!
c9901e350e77c7e21fe639931bdd25d19dd782ff
2429
2425
2021-11-25T22:34:11Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: MySQL|ZFS]]
[[Category: MariaDB|ZFS]]
==ZFS parameters==
<syntaxhighlight lang=bash>
zfs set atime=off MYSQL-DATA
zfs set compression=lz4 MYSQL-DATA
zfs set atime=off MYSQL-LOG
zfs set compression=lz4 MYSQL-LOG
zfs set recordsize=8k MYSQL-DATA/data
zfs set recordsize=16k MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-LOG/ib_log
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</syntaxhighlight>
===If you have innodb_file_per_table=on===
<syntaxhighlight lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</syntaxhighlight>
* If you only have InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k as well, because all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value like ibdata1:100M:autoextend.
==Database parameters for ZFS==
<syntaxhighlight lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = off
skip-innodb_doublewrite
</syntaxhighlight>
<syntaxhighlight lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</syntaxhighlight>
On Linux, do not forget to add the new directories to AppArmor!
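A sketch of the corresponding AppArmor additions, assuming the datadir layout above; the profile file name (/etc/apparmor.d/local/usr.sbin.mysqld) can differ per distribution:

<syntaxhighlight lang=bash>
/MYSQL-DATA/data/mysql/ r,
/MYSQL-DATA/data/mysql/** rwk,
/MYSQL-DATA/InnoDB/ r,
/MYSQL-DATA/InnoDB/** rwk,
/MYSQL-LOG/ib_log/ r,
/MYSQL-LOG/ib_log/** rwk,
</syntaxhighlight>

Reload the profile afterwards, e.g. with apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld.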
de406c3388c89759eafb680b20665fb9bd837a59
Solaris IPMP
0
73
2427
659
2021-11-25T22:31:35Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|IPMP]]
==Set the configuration to manual==
<pre>
netadm enable -p ncp defaultfixed
</pre>
==Simple IPMP with a standby interface==
<pre>
ipadm create-ip net0
ipadm create-ip net1
ipadm set-ifprop -p standby=on -m ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a local=1.2.3.4/24 ipmp0/v4
</pre>
==Link-based IPMP in a VLAN (here VLAN 2)==
<pre>
# Configure the VLAN interfaces
dladm create-vlan -l net1 -v 2 net1_vlan2
dladm create-vlan -l net2 -v 2 net2_vlan2
# Configure the VLAN interfaces for IP
ipadm create-ip net1_vlan2
ipadm create-ip net2_vlan2
# Configure the IPMP interface
ipadm create-ipmp -i net1_vlan2,net2_vlan2 ipmp0
# And configure an IP on the IPMP interface as usual
ipadm create-addr -T static -a local=10.1.2.106/24 ipmp0
# And set the default route permanently
route -p add default 10.1.2.254
</pre>
</pre>
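To verify the result, Solaris provides the ipmpstat command (a sketch; the group and interface names follow the example above):

<pre>
# Show the IPMP group status
ipmpstat -g
# Show the data addresses
ipmpstat -a
# Show the underlying interfaces
ipmpstat -i
</pre>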
63a318959d8fa40647805b12038c12bbf7162d51
LUKS - Linux Unified Key Setup
0
255
2428
2234
2021-11-25T22:32:51Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<syntaxhighlight lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</syntaxhighlight>
<syntaxhighlight lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</syntaxhighlight>
===Create and get the UUID===
'''The mkswap command in this step will erase all data on the chosen volume!!!'''
So be sure you pick the right one!
<syntaxhighlight lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</syntaxhighlight>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<syntaxhighlight lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</syntaxhighlight>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Skips the region where your UUID is written on disk, so it is not overwritten.
# noearly : Avoid race conditions of the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<syntaxhighlight lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</syntaxhighlight>
====Check the status====
<syntaxhighlight lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</syntaxhighlight>
====Make the swapFS====
<syntaxhighlight lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</syntaxhighlight>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<syntaxhighlight lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</syntaxhighlight>
Reboot to test your settings.
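After the reboot you can check that the encrypted swap is really in use (device names as above):

<syntaxhighlight lang=bash>
# Is the mapping active?
cryptsetup status cryptswap1
# Is it used as swap?
swapon -s
</syntaxhighlight>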
777f09b047e2714bae573e59ce7adcc1226006d5
Bash cheatsheet
0
37
2430
1987
2021-11-25T22:39:07Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<syntaxhighlight lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</syntaxhighlight>
=Useful variable substitutions=
==split==
For example split an ip:
<syntaxhighlight lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</syntaxhighlight>
==dirname==
<syntaxhighlight lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</syntaxhighlight>
==basename==
<syntaxhighlight lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</syntaxhighlight>
==Path name resolving function==
<syntaxhighlight lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</syntaxhighlight>
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<syntaxhighlight lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</syntaxhighlight>
This results in:
<syntaxhighlight lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</syntaxhighlight>
=Loops=
==Numbers==
$ for i in {0..9} ; do echo $i ; done
or
$ for ((i=0;i<=9;i++)); do echo $i; done
other step sizes work the same way, of course, e.g. always advancing by 3:
$ for ((i=0;i<=9;i+=3)); do echo $i; done
or even
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i> and use the no-op builtin <i>:</i> as the loop body.
<syntaxhighlight lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</syntaxhighlight>
=Functions=
==Log with timestamp==
<syntaxhighlight lang=bash>
function printlog () {
# Function:
# Log things to logfile
#
# Parameter:
# 1: logfile
# *: You can call printlog like printf (except the first parameter is the logfile)
#
# OR
#
# Just pipe things to printlog
#
local logfile=${1}
shift
if [ -n "${*}" ]
then
format=${1}
shift
printf "%s ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" ${*} >> ${logfile}
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}" >> ${logfile}
done
fi
}
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ printf "test\n\ntoast\n" | printlog /dev/stdout
20190603 12:48:13 test
20190603 12:48:13
20190603 12:48:13 toast
$ printlog /dev/stdout "test\n"
20190603 12:48:19 test
$ printlog /dev/stdout "test %s %d %s\n" "bla" 0 "bli"
20190603 12:48:25 test bla 0 bli
$
</syntaxhighlight>
=Calculations=
<syntaxhighlight lang=bash>
$ echo $[ 3 + 4 ]
7
$ echo $[ 2 ** 8 ] # 2^8
256
</syntaxhighlight>
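Note that the $[ ] form used above is deprecated bash syntax; the standard equivalent is $(( )), which also handles variables and non-decimal bases:

<syntaxhighlight lang=bash>
#!/bin/bash
echo $(( 3 + 4 ))    # 7
echo $(( 2 ** 8 ))   # 2^8 = 256
x=5
echo $(( x * 2 ))    # variables need no $ inside (( ))
echo $(( 16#ff ))    # base#number: hex ff = 255
</syntaxhighlight>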
=init scripts=
==A basic skeleton==
<syntaxhighlight lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</syntaxhighlight>
= Logging and output in your scripts =
== Add a timestamp to all output ==
<syntaxhighlight lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</syntaxhighlight>
== Add a timestamp to all output and send to file==
<syntaxhighlight lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</syntaxhighlight>
=Parameter parsing=
In progress... no time...
<syntaxhighlight lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</syntaxhighlight>
a7f3786989003bcdef86263d26ccfad4438c62d5
Category:Ameisen
14
3
2431
543
2021-11-25T22:41:56Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Insekten]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
aff14e37043b6989c38a3039599726fdd196cb0c
SSH Tipps und Tricks
0
75
2432
2213
2021-11-25T22:42:22Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, the way to the destination=
==SSH via one or more hops==
To establish an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you always log in first and then log in again from there, it is sometimes very difficult to drag the port forwardings, or the SOCKS5 proxy, along with you. It is easier to define ProxyCommands for the way from Host_A to Host_B.
We can only reach Host_B from GW_2, so we create an entry for this in ~/.ssh/config:
<pre>
Host Host_B
ProxyJump GW_2
</pre>
But we can only reach GW_2 via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyJump GW_1
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and you are tunneled through the two gateways GW_1 and GW_2.
==Port forwardings, e.g. for NFS, are now simply done like this==
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established, and you do the port forwarding directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin, where %h and %p are filled in by SSH with the host (%h) and port (%p).
==Breaking out of paradise==
Problem: The environment you are in is unfortunately so thoroughly walled in with firewalls that you cannot work. But you have to get out via SSH to quickly look something up or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server where an sshd listens on port 443, because most proxies only let you through on well-known ports.
Then put this into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
In a flash, <i>ssh ssh-via-proxy</i> gets you to the SSH destination you want to reach. Of course you can use this entry as a ProxyCommand again, and so on.
==Oh yes... the internal wiki...==
Not a problem either if it is only reachable from the internal network; then we simply ask via a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f).
=The fingerprint=
For verification it is often easier to work with shorter strings of digits. That is why the fingerprint is handy for comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
=Restrict users=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Start pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
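Instead of typing these options every time, the workarounds can be made permanent per host in ~/.ssh/config (standard OpenSSH client options; the host name is an example):

<pre>
Host oldbox
    HostKeyAlgorithms +ssh-dss
    KexAlgorithms +diffie-hellman-group1-sha1
</pre>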
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available on several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns successful (code was correct) all authentication is done.
* new_authtok_reqd=done : New authentication token is required set to done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator failed no other authentications will be tried
* nullok : Allow user to access auth mechanism even if the password is empty
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the setting in /etc/pam.d/sshd, "PasswordAuthentication no" is not sufficient: SSH will still ask for a password, because /etc/pam.d/sshd enables password authentication.
6aaebe1755fc1b40c06d0d782df7c40dc163d877
Solaris Loadgenerator
0
216
2433
2222
2021-11-25T22:44:54Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Loadgenerator]]
This is a little script to generate load. It uses gzip and bzip2 to generate load fetched from void and compressed into the void again :-).
Call it with <scriptname> <number> to generate a load of <number>.
<syntaxhighlight lang=bash>
#!/usr/bin/bash
count=$1
for((i=1;i<=${count};i++))
do
cat /dev/urandom | bzip2 | gzip -9 >/dev/null &
done
</syntaxhighlight>
6fa3b0768902811a3b1eb763d9ba92d9a7e97e85
Solaris 11 unsorted
0
379
2434
2371
2021-11-25T22:51:25Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:Solaris11|unsorted]]
== kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA ==
Problem:
<pre>
Apr 2 11:05:29 host42 kcfd[77]: [ID 180312 user.error] kcfd: unable to load certificate from /etc/crypto/certs/ORCLObjectCA
Apr 2 11:05:29 host42 openssl[2360]: [ID 238837 user.error] libpkcs11: /usr/lib/security/amd64/pkcs11_softtoken.so unexpected failure in ELF signature verification. See cryptoadm(1M). Skipping this plug-in.
</pre>
Solution:
<pre>
# pkg fix pkg:/crypto/ca-certificates
</pre>
== Solaris 11 up to date? ==
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
printf "%s\n%s\n" "${local}" "${remote}" | nawk -v package="${package}" '
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUptodate at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[4], branch_part[5], branch_part[6]);
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
# Warn (but continue) if the repository metadata cannot be refreshed
if ! pkg refresh >/dev/null ; then
    echo "Cannot refresh packages" >&2
fi
if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</syntaxhighlight>
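For illustration, this is how the sprintf above turns an 11.4 Branch value into the BE name suffix (plain awk here instead of Solaris nawk; the branch value is a made-up example):
<syntaxhighlight lang=bash>
# Branch "11.4.0.0.0.38.0" -> BE suffix "11.4.0.0.38"
# (fields 1, 2, 4, 5 and 6 of the branch, as in the script above).
awk 'BEGIN{
    branch="11.4.0.0.0.38.0";
    split(branch, p, /\./);
    printf "%d.%d.%d.%d.%d\n", p[1], p[2], p[4], p[5], p[6];
}'
</syntaxhighlight>
This would lead to a suggested ''pkg update --be-name solaris_11.4.0.0.38''.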
9d4e6168e022a44a58f038301e14e592240281e3
NetApp SSH
0
110
2435
2337
2021-11-25T22:52:50Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NetApp|SSH]]
== Check whether the SSH home directory /etc/sshd/<user>/.ssh exists ==
<syntaxhighlight lang=bash>
nac*> priv set -q diag
nac*> ls /etc/sshd/
.
..
ssh_host_key
ssh_host_key.pub
ssh_host_rsa_key
ssh_host_rsa_key.pub
ssh_host_dsa_key
ssh_host_dsa_key.pub
</syntaxhighlight>
== Create a directory with mode 0700 ==
<syntaxhighlight lang=bash>
nac*> options wafl.default_qtree_mode
wafl.default_qtree_mode 0777
nac*> options wafl.default_qtree_mode 0700
nac*> qtree create /vol/vol0/__
nac*> options wafl.default_qtree_mode 0777
</syntaxhighlight>
== Check / turn on the NDMPd ==
<syntaxhighlight lang=bash>
nac*> ndmpd status
ndmpd OFF.
No ndmpd sessions active.
nac*> ndmpd on
nac*> ndmpd status
ndmpd ON.
No ndmpd sessions active.
</syntaxhighlight>
== Create the directory by copying the QTree ==
<syntaxhighlight lang=bash>
nac*> ndmpcopy /vol/vol0/__ /vol/vol0/etc/sshd/root/.ssh
...
Ndmpcopy: Transfer successful [ 0 hours, 0 minutes, 20 seconds ]
Ndmpcopy: Done
nac*> qtree delete /vol/vol0/__
</syntaxhighlight>
== Write the SSH key to /etc/sshd/<user>/.ssh/authorized_keys ==
<syntaxhighlight lang=bash>
nac*> wrfile /etc/sshd/root/.ssh/authorized_keys
ssh-dss AAA...== user@clienthost
^C
</syntaxhighlight>
d63601907c196150e30a4fe03d9e8acd31cf0d0c
NFS
0
386
2436
2142
2021-11-25T22:52:55Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv3=
==Server==
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a fixed, known port is much better.
* /etc/default/nfs-kernel-server
<syntaxhighlight lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</syntaxhighlight>
===Bind statd to specific port===
You only need this if you still use protocols below NFSv4.
* /etc/default/nfs-common
<syntaxhighlight lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</syntaxhighlight>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<syntaxhighlight lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</syntaxhighlight>
Activate it without rebooting through:
<syntaxhighlight lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</syntaxhighlight>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here, too!
* /etc/ufw/applications.d/nfs
<syntaxhighlight lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</syntaxhighlight>
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should set a Domain explicitly. Set the same Domain on the server and the client(s)!
<syntaxhighlight lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</syntaxhighlight>
===Disable at least NFSv2===
* /etc/default/nfs-kernel-server
<syntaxhighlight lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335 --no-nfs-version 2"
RPCNFSDOPTS="--no-nfs-version 2"
</syntaxhighlight>
===Disable all but NFSv4 and higher===
* /etc/default/nfs-kernel-server
<syntaxhighlight lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
NEED_STATD="no"
NEED_IDMAPD="yes"
RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 3"
</syntaxhighlight>
===Configure ufw===
For plain NFSv4 and up you just need this:
<syntaxhighlight lang=bash>
# ufw allow from 172.16.16.16/28 to any port 2049/tcp
</syntaxhighlight>
If you still need NFSv3, see above.
===List clients that are connected===
<syntaxhighlight lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</syntaxhighlight>
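For a client to appear in that list it needs a mount; a minimal /etc/fstab line could look like this (server name, export path and mount point are placeholders; the idmapd Domain from above must match on both sides):
<pre>
# /etc/fstab -- NFSv4.1 over TCP, port 2049 only
server01.myfantasy.domain:/export/data  /mnt/data  nfs  vers=4.1,_netdev  0  0
</pre>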
==Server and Client==
da06614472e2ef7d8e2fdf5a83665adc52c07c4f
Template:!!
10
55
2437
93
2021-11-25T22:52:59Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>||</includeonly><noinclude>''This template is needed for [[Vorlage:Ameisenart]] and [[Vorlage:Ameisengattung]].''</noinclude><noinclude>
[[Category:Vorlage]]
</noinclude>
a904c7978918ca7e4fe04ecf203661adb50cb198
Solaris process debugging
0
254
2438
2225
2021-11-25T22:53:22Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Debugging]]
==Swap usage per process==
<syntaxhighlight lang=awk>
# pgrep . | xargs -n 1 pmap -S 2>/dev/null | nawk '
function kb2h(value){
unit=1;
while(value>=1024){
unit++;
value/=1024;
};
split("KB,MB,GB,TB,PB", unit_string, /,/);
return sprintf("%7.2f %s",value,unit_string[unit]);
}
/[0-9]+:/ {
pid=$1;
prog=$2;
}
/^total/{
swap_total+=$3;
printf ("%s\t%s\t%s\n",pid,kb2h($3),prog);
}
END{
printf "Total:\t%s\n",kb2h(swap_total);
}'
</syntaxhighlight>
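The kb2h helper above can be tried on its own (plain awk here instead of Solaris nawk):
<syntaxhighlight lang=bash>
# Scale a KB value into KB/MB/GB/TB/PB, as kb2h in the script above does.
# 1536 KB scales once: 1536/1024 -> "   1.50 MB"
awk '
function kb2h(value){
    unit=1;
    while(value>=1024){
        unit++;
        value/=1024;
    };
    split("KB,MB,GB,TB,PB", unit_string, /,/);
    return sprintf("%7.2f %s",value,unit_string[unit]);
}
BEGIN{ print kb2h(1536); }'
</syntaxhighlight>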
==Set the core file size limit on a process==
For example for the sshd (and all children spawned from now on):
<syntaxhighlight lang=bash>
ssh-server# prctl -n process.max-core-size -v 2g -t privileged -r -e deny $(pgrep -u root -o sshd)
</syntaxhighlight>
Check:
<syntaxhighlight lang=bash>
ssh-server# prctl -n process.max-core-size $(pgrep -u root -o sshd)
process: 1491: /usr/lib/ssh/sshd
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-core-size
privileged 2.00GB - deny -
system 8.00EB max deny -
</syntaxhighlight>
Now all processes (for example newly logged-in users) will have a core file size limit of 2GB... really? No!
<syntaxhighlight lang=bash>
ssh-client# ssh ssh-server
ssh-server# ulimit -Ha | grep core
core file size (blocks, -c) 2097152
</syntaxhighlight>
See what it says: blocks <-- !!!
From the man page: -c Maximum core file size (in 512-byte blocks)
717bedea02ec5d94e94f7e21494e8d26507e3fec
ZFS sync script
0
215
2439
2382
2021-11-25T22:53:36Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS|Sync]]
Like all of my scripts, this script comes without any guarantees!!!
Use it at your own risk!
==About the script==
* It uses [http://www.maier-komor.de/mbuffer.html mbuffer]. It is easy to compile.
* It uses gawk.
* The variable ''SECURE'' defines if you want to use ssh to encrypt your stream. Set it to ''yes'' or ''no''.
* To mark the datasets that the backup host should copy, use this on the source:
<syntaxhighlight lang=bash>
# /usr/sbin/zfs set de.timmann:auto-backup=<backup host> <dataset>
</syntaxhighlight>
* Run the script on the destination/backup host.
* If you don't want to use root as the backup user on the source host, do this to create a ''zfssync'' user (Solaris syntax):
<syntaxhighlight lang=bash>
# useradd -m zfssync
# passwd -N zfssync
# usermod -K type=normal zfssync
</syntaxhighlight>
* Make an ssh-key exchange so that ''SRC_USER'' can log in without a password.
Good luck!
==zfs_sync.sh==
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2013
# This script is a rotten bunch of code... rewrite it!
# Some defaults
BACKUP_PROPERTY="de.timmann:auto-backup"
BACKUP_SNAPSHOT_NAME="zfssync"
MBUFFER_PORT=10001
MBUFFER=/opt/mbuffer/bin/mbuffer;
SRC_USER=zfssync
INITIAL_COPIES=3
# Default yes means use SSH for encryption over the net. Every other value means just mbuffer.
SECURE="yes"
LOCAL_SYNC="no"
MBUFFER_PORT=10001
MBUFFER_OPTS="-v 0 --md5 -s 128k -m 256M"
BACKUP_PROPERTY="de.timmann:auto-backup"
ZFS=/usr/sbin/zfs
SSH="/usr/bin/ssh -xc blowfish"
AWK=/usr/bin/gawk
#AWK=/opt/sfw/bin/gawk
GREP=/usr/bin/grep
DATE=/usr/bin/date
MD5="/usr/bin/digest -a md5"
ROUTE=/usr/sbin/route
MBUFFER="/opt/mbuffer/bin/mbuffer"
MYHOST=$(/usr/bin/hostname)
MYNAME=$(/usr/bin/basename $0)
function usage () {
if [ $# -gt 0 ]
then
if [ "_${1}_" != "_help_" ]
then
echo "Error: ${MYNAME} : $*"
fi
else
echo "Error: ${MYNAME} : Check parameters"
fi
cat <<EOU
Usage: ${MYNAME} <params>
Where params is from this set of parameters:
-s|--src-ip <IP> The host from where we want to sync
-d|--dst-ip <IP> The IP on this host where the remote mbuffer should try to connect to
If omitted the IP to use is guessed via route get.
-u|--user <user> The user on "--src-ip" which has rights to send a zfs.
It must be able to login via ssh with public key.
On Solaris it is the profile "ZFS File System Management"
Try this on the "--src-ip":
# roleadd \
-d /export/home/zfssync \
-c "User for zfs send/recv" \
-s /bin/bash \
-m \
-P "ZFS File System Management" \
zfssync
# rolemod -K type=normal zfssync
# passwd -N zfssync
And then put the ssh-public-key from this host into
/export/home/zfssync/.ssh/authorized_keys
on the "--src-ip".
Remember to set the permissions on .ssh to 700 and .ssh/authorized_keys to 600.
The home directory of the user must not be world-writable.
-sp|--src-pool <zpool> The zpool we want to sync from "--src-ip".
-dp|--dst-pool <zpool> The zpool on this host where we want to sync to ${MYNAME}.
-mbp|--mbuffer-port <port>
If the default port 10001 is in use use another port.
-mb|--mbuffer-path <path>
Path of mbuffer binary including binary itself.
-mbbw|--mbuffer-bwlimit <rate>
Limit the read bandwidth of mbuffer (mbuffer option -r)
From mbuffer --help: limit read rate to <rate> B/s, where <rate> can be given in b,k,M,G
-bp|--backup-property <property>
This defaults to ${BACKUP_PROPERTY}.
You have to set this property on all ZFS datasets to ${MYHOST}.
# /usr/sbin/zfs set ${BACKUP_PROPERTY}=${MYHOST} <dataset>
This is inherited as usual.
-bsn|--backup-snap-name <snapshotname>
This is the name of the snapshot which we use to sync.
This defaults to ${BACKUP_SNAPSHOT_NAME}.
Never delete this snapshot manually or you will break the sync and restart
from the beginning.
-i|--insecure Not for production environments! No ssh tunneling. No encryption over the net!
EOU
##-l|--local Just do a local zfs send/recv...
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
--help|-h)
usage "help"
;;
-l|--local)
LOCAL_SYNC="yes"
SRC_HOST="localhost"
param="dummy"
shift;
;;
-i|--insecure|--fuck-off-security)
SECURE="no"
param="dummy"
shift;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
param=$1
if [ $# -ge 2 -a "_${2%-*}_" != "__" ]
then
value=$2
shift
fi
shift
;;
esac
case $param in
-s|--src-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_HOST=${value}
;;
-d|--dst-ip)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_HOST=${value};
;;
-u|--user)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_USER=${value}
;;
-sp|--src-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
SRC_POOL=${value}
;;
-bsn|--backup-snap-name)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_SNAPSHOT_NAME=${value}
;;
-dp|--dst-pool)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
DST_POOL=${value}
;;
-mbp|--mbuffer-port)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_PORT=${value}
;;
-mb|--mbuffer-path)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER=${value}
;;
-mbbw|--mbuffer-bwlimit)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
MBUFFER_OPTS="${MBUFFER_OPTS} -r ${value}"
;;
-bp|--backup-property)
if [ -z $value ] ; then usage "Param ${param} needs a value" ; fi
BACKUP_PROPERTY=${value}
;;
dummy)
;;
*)
usage "Unknown parameter $1"
esac
done
if [ "_${LOCAL_SYNC}_" == "_no_" ]
then
if [ -z ${SRC_HOST} ]; then usage "-s|--src-ip is missing" ; fi
# Guess the right IP for communication with source host
if [ -z ${DST_HOST} ]; then
DST_HOST=$(${ROUTE} -vn get ${SRC_HOST} | ${AWK} '{ip=$2}END{print ip}')
if [ -z ${DST_HOST} ]; then
usage "-d|--dst-ip is missing"
fi
fi
fi
if [ -z ${SRC_POOL} ]; then usage "-sp|--src-pool is missing" ; fi
if [ -z ${DST_POOL} ]; then usage "-dp|--dst-pool is missing" ; fi
SRC_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_src_ds.out
DST_DATASETS=/tmp/${MYNAME}_${DST_POOL/\//_}_dst_ds.out
LOCK_FILE=/var/run/${MYNAME}_${DST_POOL/\//_}.lck
TMP_FILE1=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp1
TMP_FILE2=/tmp/${MYNAME}_${DST_POOL/\//_}.tmp2
START_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${START_TIME} 'BEGIN{print "START:",strftime("%d.%m.%Y %H:%M.%S",time)}'
# Clean up on signal
# -------------------------
trap 'echo "\n--- Got signal: Exiting ...\n"; \
date ; \
sleep 3; kill -9 ${!} 2>/dev/null; \
/usr/bin/rm -f ${LOCK_FILE}; \
exit 1' 1 2 3 13 14 15 18
###########################
if [ -f ${LOCK_FILE} ] ; then
echo "$0 is already running as PID $(/usr/bin/cat ${LOCK_FILE}), look in ${LOCK_FILE}"
exit 1
else
echo $$ > ${LOCK_FILE}
fi
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL} > ${SRC_DATASETS} &
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} list -rH -t filesystem,snapshot,volume -o name,type,${BACKUP_PROPERTY} -s creation ${SRC_POOL}" > ${SRC_DATASETS} &
fi
${ZFS} list -rH -t filesystem,snapshot,volume -o name,type -s creation ${DST_POOL} > ${DST_DATASETS} &
wait
function convert_to_poolname () {
from_zfs=$1
search=$2
replace=$3
echo ${from_zfs} | sed -e "s#^${search}#${replace}#g"
}
function is_available () {
snapshot=$1
list=$2
${AWK} -v snapshot=${snapshot} 'BEGIN{rc=1;}$1 == snapshot{print $1; rc=0;}END{exit rc;}' ${list}
return $?
}
function expire_dst_pool_snapshots () {
days_to_keep=$1
min_to_keep=$2
for expired_zfs in $(
${ZFS} list -o creation,name -S creation -t snapshot | \
${AWK} \
-v days_to_keep=${days_to_keep} \
-v min_to_keep=${min_to_keep} \
-v DST_POOL="^${DST_POOL}" \
'
BEGIN{
split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec",mon,":");
for(m in mon){
month[mon[m]]=m
};
expire_date=systime()-days_to_keep*60*60*24
}
$NF ~ DST_POOL {
filesystem=$NF;
gsub(/@.*$/,"",filesystem);
split($4,time,":");
filesystem_date=mktime(sprintf("%d %02d %02d %02d %02d 00", $5, month[$2], $3, time[1], time[2]));
count[filesystem]++;
if(filesystem_date < expire_date && count[filesystem] > min_to_keep )
{
print $NF;
}
}')
do
printf "$(${DATE}) Destroying snapshot ${expired_zfs}\n"
${ZFS} destroy ${expired_zfs}
done
}
function get_src_list () {
${AWK} -v backup_server=${MYHOST} '
( $2=="filesystem" || $2=="volume" ) && $3==backup_server {
path[$1]=1;
for(name in path){
# delete name from list, if name is substring of $1
if( index($1,name)==1 && name != $1 && path[name]!=0 ){
path[name]=0;
}
}
}
END{
for(name in path){
if(path[name]==1) print name
}
}
' ${SRC_DATASETS}
}
function first_snapshot () {
${AWK} -v zfs="${1}@" '
$2=="snapshot" && $1 ~ zfs {
first=$1;
# and we're done...
nextfile;
}
END{
print first;
}
' $2
}
function last_snapshot () {
${AWK} -v zfs="^${1}" -F '[@ \t]' '
$3 == "snapshot" && $1 ~ zfs {
last=$1"@"$2;
}
END{
printf last;
}
' $2
}
function get_incremental_snapshot () {
src_host=$1
src_datasets=$2
first=$3
last=$4
dst_pool=$5
dst_datasets=$6
if [ $# -lt 6 ] ; then
echo "Called from line ${BASH_LINENO[$i]} with $# Arguments"
end 1
fi
src_zfs=$(echo ${first} | ${AWK} -F'@' '{print $1}')
first_snap=$(echo ${first} | ${AWK} -F'@' '{print FS""$2}')
echo "Getting incremental snapshots ${first} -> ${last}..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} send -I ${first_snap} ${last} | ${ZFS} recv -vFd ${dst_pool}
else
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -I ${first_snap} ${last} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
}
function get_initial_snapshot () {
src_host=$1
src_datasets=$2
zfs=$3
dst_pool=$4
dst_datasets=$5
if [ -z "$(is_available ${zfs} ${dst_datasets})" ] ; then
echo "Getting snapshot ${zfs}..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} send -R ${zfs} | ${ZFS} recv -vFd ${dst_pool}
else
if [ "_${SECURE}_" == "_yes_" ]
then
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I 127.0.0.1:${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
-R ${MBUFFER_PORT}:127.0.0.1:${MBUFFER_PORT} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O 127.0.0.1:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
else
# setup receiver
${MBUFFER} ${MBUFFER_OPTS} -l ${TMP_FILE1} -I ${MBUFFER_PORT} | \
${ZFS} recv -vFd ${dst_pool} 2>&1 &
# start sender
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} \
"${ZFS} send -R ${zfs} | ${MBUFFER} ${MBUFFER_OPTS} -O ${DST_HOST}:${MBUFFER_PORT} 2>&1" >${TMP_FILE2} &
fi
wait
local_md5=$(grep md5 ${TMP_FILE1})
remote_md5=$(grep md5 ${TMP_FILE2})
local_summary=$(grep summary ${TMP_FILE1})
remote_summary=$(grep summary ${TMP_FILE2})
printf "remote %s\nlocal %s\n" "${remote_md5}" "${local_md5}"
printf "remote %s\nlocal %s\n" "${remote_summary}" "${local_summary}"
rm -f ${TMP_FILE1} ${TMP_FILE2}
fi
fi
}
function timestamp () {
echo $(${DATE} '+%Y%m%d-%H:%M:%S')
}
function expire_backup_snapshots () {
src_host=$1
src_datasets=$2
dst_datasets=$3
src_last_to_keep=$4
dst_pool=$5
src_zfs=$(echo ${src_last_to_keep} | ${AWK} -F'@' '{print $1}')
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${dst_pool})
dst_last_to_keep=$(convert_to_poolname ${src_last_to_keep} ${SRC_POOL} ${dst_pool})
echo "Deleting old backup snapshots before ${dst_last_to_keep}"
if ( ${ZFS} list -o name ${dst_last_to_keep} >/dev/null 2>&1 ) ; then
for src_backup_snapshot in $(${AWK} -v src_backup="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" -v src_last_to_keep="${src_last_to_keep}" '
$1 == src_last_to_keep {
exit 0;
}
$1 ~ src_backup {
print $1;
}
' ${src_datasets})
do
printf "\tDeleting on src ${src_backup_snapshot} ..."
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} destroy ${src_backup_snapshot}
status=$?
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} destroy ${src_backup_snapshot}"
status=$?
fi
if [ ${status} -eq 0 ] ; then
echo "done"
else
echo "failed"
fi
done
for dst_backup_snapshot in $(${AWK} -v dst_backup="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" -v dst_last_to_keep=${dst_last_to_keep} '
$1 == dst_last_to_keep {
exit 0;
}
$1 ~ dst_backup {
print $1;
}
' ${dst_datasets})
do
printf "\tDeleting on destination ${dst_backup_snapshot} ..."
if ( ${ZFS} destroy ${dst_backup_snapshot} ) ; then
echo "done"
else
echo "failed"
fi
done
else
echo "Strange, we do not have a copy of ${dst_last_to_keep} => STOP!"
fi
}
function end () {
/usr/bin/rm -f ${LOCK_FILE}
exit $1
}
for src_zfs in $(get_src_list) ; do
echo "Evaluating ${src_zfs}"
dst_zfs=$(convert_to_poolname ${src_zfs} ${SRC_POOL} ${DST_POOL})
last_src=$(last_snapshot ${src_zfs} ${SRC_DATASETS})
last_dst=$(last_snapshot ${dst_zfs} ${DST_DATASETS})
last_backup_src=$(${AWK} -v zfs="${src_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${SRC_DATASETS})
last_backup_dst=$(${AWK} -v zfs="${dst_zfs}@${BACKUP_SNAPSHOT_NAME}" '$1 ~ zfs{last=$1}END{printf last}' ${DST_DATASETS})
last_dst_on_src=$(convert_to_poolname ${last_dst} ${DST_POOL} ${SRC_POOL})
this_backup_src=${src_zfs}@${BACKUP_SNAPSHOT_NAME}_$(timestamp)
# Create snapshot for incremental backups
if [ "_${LOCAL_SYNC}_" == "_yes_" ]
then
${ZFS} snapshot ${this_backup_src}
else
${SSH} ${SRC_USER:+"${SRC_USER}@"}${SRC_HOST} "${ZFS} snapshot ${this_backup_src}"
fi
if [ -z "${last_src}" ] ; then
last_src=${this_backup_src}
fi
if [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" -a -z "${last_dst}" ] ; then
echo "zfs is on dst, but no snapshots. Getting ${last_src}..."
get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_src} ${DST_POOL} ${DST_DATASETS}
# Look for last backup snapshot on destination
elif [ -n "${last_backup_dst}" ] ; then
# Name of last backup snapshot on src
last_dst_backup_on_src=$(convert_to_poolname ${last_backup_dst} ${DST_POOL} ${SRC_POOL})
# If converted name is not empty and snapshot is in the list of src snapshots
# then get all snapshots from last backup until now
if [ -n "${last_dst_backup_on_src}" ] ; then
if [ -n "$(is_available ${last_dst_backup_on_src} ${SRC_DATASETS})" ] ; then
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last_dst_backup_on_src} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
fi
elif [ -n "$(is_available ${dst_zfs} ${DST_DATASETS})" ] ; then
# No last backup snapshot on dst but we have snapshots
if [ -n "$(is_available ${last_dst_on_src} ${SRC_DATASETS})" ] ; then
echo "Try to backup from ${last_dst_on_src} to ${this_backup_src}"
first=${last_dst_on_src}
last=${last_src}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
else
echo "OK I tried hard... now it is your job..."
fi
else
# No existing copies for this zfs. Get the last <INITIAL_COPIES> copies
first=$(${AWK} -v zfs=${src_zfs} -v intitial_copies=$((${INITIAL_COPIES}-1)) '
$1 ~ zfs && $2=="snapshot" {
last[++count]=$1;
}
END {
if(count>intitial_copies){
print last[count-intitial_copies]
}else{
print last[1]
}
}' ${SRC_DATASETS})
last=$( ${AWK} -v zfs=${src_zfs} '$1 ~ zfs && $2=="snapshot"{last=$1}END{printf last}' ${SRC_DATASETS} )
get_initial_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${DST_POOL} ${DST_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${first} ${last} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
# Get the snapshot of this backup
printf "%s\tsnapshot\n" ${this_backup_src} >> ${SRC_DATASETS}
get_incremental_snapshot ${SRC_HOST} ${SRC_DATASETS} ${last} ${this_backup_src} ${DST_POOL} ${DST_DATASETS} && \
expire_backup_snapshots ${SRC_HOST} ${SRC_DATASETS} ${DST_DATASETS} ${this_backup_src} ${DST_POOL}
fi
echo
echo --------------------------------------------------------------------------------
date
echo
done
# expire_dst_pool_snapshots days_to_keep min_to_keep
expire_dst_pool_snapshots 34 70
END_TIME=$(${AWK} 'BEGIN{printf systime();}')
${AWK} -v time=${END_TIME} 'BEGIN{print "END :",strftime("%d.%m.%Y %H:%M.%S",time)}'
${AWK} -v start=${START_TIME} -v end=${END_TIME} 'BEGIN{print "DURATION:",strftime("%H:%M.%S",end-start-3600*strftime("%H",0))}'
end 0
</syntaxhighlight>
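A typical invocation on the backup host could look like this (host and pool names are placeholders):
<syntaxhighlight lang=bash>
# Pull all datasets tagged de.timmann:auto-backup=<this host> from pool
# "tank" on host "source01" into the local pool "backup",
# limiting the read rate to 20 MB/s:
./zfs_sync.sh --src-ip source01 --src-pool tank --dst-pool backup --mbuffer-bwlimit 20M
</syntaxhighlight>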
4a10e3b4a392b3a23d0b474981a818b69e539cbd
Category:Myrmica
14
5
2440
5
2021-11-25T22:53:42Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ameisen]]
8b4529e141e02312735639bddbd1b0df94a379a3
Category:Orthoporus
14
155
2441
423
2021-11-25T22:53:58Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Tausendfuesser]]
350bfff5a2ce40cf57a13992717d176cf5568e4a
ZFS Recovery
0
30
2442
2313
2021-11-25T22:54:28Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS|Recovery]]
[[Category:Solaris]]
==Panic at boot time==
See [http://sunsolve.sun.com/search/document.do?assetkey=1-66-233602-1 SunAlert 233602 : Solaris 10 Assertion Failure in ZFS May Cause a System Panic]:
The best recovery for this is to do the following:
<pre>
1. Set the following in /etc/system:
set zfs:zfs_recover=1
set aok=1
2. Import the pool using 'zpool import'
3. Run a full scrub on the pool using 'zpool scrub'
4. Use 'zdb -d' and make sure that there is no ondisk corruption reported
5. Once the pool comes to a clean state, comment / remove the added entries in /etc/system.
</pre>
==Rolling back to an earlier uberblock==
<syntaxhighlight lang=bash>
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from
a backup source.
</syntaxhighlight>
In /etc/zfs:
<syntaxhighlight lang=bash>
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
...
/dev/dsk/c7t0d0s0
...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</syntaxhighlight>
For a zpool in Solaris Cluster:
<syntaxhighlight lang=bash>
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# zpool import -o readonly=on -c defect_pool.cachefile
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</syntaxhighlight>
So, e.g., to go back to Tue Nov 6 07:54:09 2012 -> txg 40353853
<syntaxhighlight lang=bash>
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</syntaxhighlight>
==PANIC, NOTICE: spa_import_rootpool: error 19==
The solution is to specify the pool and the device explicitly. So if during boot you get:
<pre>
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</pre>
A boot into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<pre>
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</pre>
69ed8e8abc1981b2127097771a38db82164b4bb1
Linux Tipps und Tricks
0
273
2443
2199
2021-11-25T22:54:36Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always correspond to the lowest device name!
Check with <i>lsscsi</i> from the Ubuntu package lsscsi:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
<syntaxhighlight lang=bash>
# mount
# pvs
# zpool status
# etc.
</syntaxhighlight>
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the address reported by lsscsi above.
Et voilà:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The virtual disk was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now tell the zpool to grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Send an online command to the already-online device to force the zpool to recheck its size; this resizes the pool without an export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voila:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
Done.
5b28ebc2e21d57f08a50a4ee5e5e3b8d0812671f
RadSecProxy
0
345
2444
2092
2021-11-25T22:54:51Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source at [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
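If you want to check the subject-hash naming scheme without touching the real CA file, you can reproduce it with a throwaway self-signed certificate (a sketch; a reasonably recent openssl is assumed, and the paths are temporary):

```shell
# Create a throwaway self-signed certificate and name it by its subject hash
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=throwaway" \
    -keyout "${tmpdir}/key.pem" -out "${tmpdir}/cert.pem" 2>/dev/null
certhash=$(openssl x509 -noout -hash -in "${tmpdir}/cert.pem")
mv "${tmpdir}/cert.pem" "${tmpdir}/${certhash}.0"
ls "${tmpdir}"
```

The resulting file name has exactly the <hash>.0 form that OpenSSL's CA path lookup expects.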
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog).
The config, certificate, and key are not readable via the user's primary group (nogroup) but via the group radsecproxy that the process runs under (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this in /lib/systemd/system/radsecproxy.service and run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server if the radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<syntaxhighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</syntaxhighlight>
928d30aaf127ba27b171829c4774f5e1e162a32c
Bash cheatsheet
0
37
2445
2430
2021-11-25T22:54:54Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<syntaxhighlight lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</syntaxhighlight>
=Useful variable substitutions=
==split==
For example split an ip:
<syntaxhighlight lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</syntaxhighlight>
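A sketch of the reverse operation: joining an array back into one string with a delimiter, using a subshell-local IFS:

```shell
# Join an array into one string, using IFS inside a subshell
delimiter="."
declare -a octets=(10 1 2 3)
ip=$(IFS="${delimiter}"; echo "${octets[*]}")   # ${array[*]} joins with the first IFS character
echo "${ip}"
```

The subshell keeps the IFS change from leaking into the rest of the script.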
==dirname==
<syntaxhighlight lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</syntaxhighlight>
==basename==
<syntaxhighlight lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</syntaxhighlight>
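The same operators also handle file name extensions; a short sketch:

```shell
# Strip or keep file name extensions with parameter expansion
file=archive.tar.gz
echo ${file%.*}    # strip the last extension
echo ${file%%.*}   # strip all extensions
echo ${file##*.}   # keep only the last extension
```

`%` / `#` remove the shortest match, `%%` / `##` the longest.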
==Path name resolving function==
<syntaxhighlight lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</syntaxhighlight>
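A usage sketch for dir_resolve (the demo directory /tmp/dir_resolve_demo is made up for the example; the function body is copied from above so the snippet is self-contained):

```shell
# dir_resolve as defined above
dir_resolve() {
    local dir=${1%/*}
    local file=${1##*/}
    # if the name does not contain a / leave file blank or the name will be name/name
    [ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
    [ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
    pushd "$dir" &>/dev/null || return $?
    echo ${PWD}${file:+"/"${file}}
    popd &> /dev/null
}
# Example: resolve a relative path containing ".." to an absolute one
mkdir -p /tmp/dir_resolve_demo/sub
cd /tmp/dir_resolve_demo
resolved=$(dir_resolve sub/../sub)
echo "${resolved}"
```

Because the function runs in a command substitution subshell, the caller's working directory is untouched.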
=Arrays=
==Reverse the order of elements==
An example for services in normal and reverse order for start/stop
<syntaxhighlight lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</syntaxhighlight>
This results in:
<syntaxhighlight lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</syntaxhighlight>
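The same reversal can be done in one line with tac from GNU coreutils (works here because the element names contain no whitespace):

```shell
# Reverse an array with tac; printf emits one element per line
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START=( $(printf '%s\n' "${SERVICES_STOP[@]}" | tac) )
echo ${SERVICES_START[*]}
```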
=Loops=
==Numbers==
<syntaxhighlight lang=bash>
$ for i in {0..9} ; do echo $i ; done
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</syntaxhighlight>
Other step sizes work the same way, e.g. incrementing by 3:
<syntaxhighlight lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</syntaxhighlight>
or even with two loop variables:
<syntaxhighlight lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</syntaxhighlight>
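The stepped loops above can also be driven by seq from GNU coreutils:

```shell
# seq from GNU coreutils: seq FIRST INCREMENT LAST
for i in $(seq 0 3 9); do echo $i; done
```

Unlike {0..9}, the seq bounds may be variables.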
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i> and use the null command <i>:</i> (a no-op) as the loop body.
<syntaxhighlight lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</syntaxhighlight>
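The same loop can be written with <i>until</i> and the modern $(( )) arithmetic, which some find easier to read:

```shell
# Run until the control expression becomes true
i=1
until (( i >= 10 ))
do
    i=$(( i + 1 ))
done
echo $i
```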
=Functions=
==Log with timestamp==
<syntaxhighlight lang=bash>
function printlog () {
# Function:
# Log things to logfile
#
# Parameter:
# 1: logfile
# *: You can call printlog like printf (except the first parameter is the logfile)
#
# OR
#
# Just pipe things to printlog
#
local logfile=${1}
shift
if [ -n "${*}" ]
then
format=${1}
shift
printf "%s ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" ${*} >> ${logfile}
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}" >> ${logfile}
done
fi
}
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ printf "test\n\ntoast\n" | printlog /dev/stdout
20190603 12:48:13 test
20190603 12:48:13
20190603 12:48:13 toast
$ printlog /dev/stdout "test\n"
20190603 12:48:19 test
$ printlog /dev/stdout "test %s %d %s\n" "bla" 0 "bli"
20190603 12:48:25 test bla 0 bli
$
</syntaxhighlight>
=Calculations=
<syntaxhighlight lang=bash>
$ echo $[ 3 + 4 ]
7
$ echo $[ 2 ** 8 ] # 2^8
256
</syntaxhighlight>
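Note that $[ ] is a deprecated syntax; $(( )) is the portable form and gives the same results:

```shell
# $(( )) is the POSIX replacement for the deprecated $[ ] syntax
echo $(( 3 + 4 ))
echo $(( 2 ** 8 ))   # exponentiation, as above
```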
=init scripts=
==A basic skeleton==
<syntaxhighlight lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</syntaxhighlight>
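The name-derivation logic of the skeleton can be traced in isolation (the daemon name mydaemon is made up for the example):

```shell
# How the skeleton derives the command from its own file name
NAME=mydaemon
SELF=mydaemon-start.sh        # e.g. the name of a symlink to the script
command=${SELF%.sh}           # strip ".sh"        -> mydaemon-start
command=${command##${NAME}-}  # strip "mydaemon-"  -> start
[ "_${command}_" == "_${NAME}_" ] && command=""
echo "${command}"
```

Calling the script under its plain name leaves command empty, which falls through to the usage message.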
= Logging and output in your scripts =
== Add a timestamp to all output ==
<syntaxhighlight lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</syntaxhighlight>
== Add a timestamp to all output and send to file==
<syntaxhighlight lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</syntaxhighlight>
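With bash process substitution the explicit FIFO can be dropped entirely; a sketch (same timestamp format and log file as above):

```shell
#!/bin/bash
LOGFILE=/tmp/bla.log
# The subshell keeps the redirection local to the bracketed commands
(
    exec > >(sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" > "${LOGFILE}") 2>&1
    echo bla
    echo bli >&2
)
sleep 1   # give the background sed a moment to flush
```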
=Parameter parsing=
In progress... no time...
<syntaxhighlight lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</syntaxhighlight>
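The --key=value branch can be traced by hand (the parameter name --db-host is hypothetical):

```shell
# The --key=value branch in isolation
arg="--db-host=localhost"
param=${arg%=*}               # -> --db-host
value=${arg#*=}               # -> localhost
param=${param#--}             # -> db-host
param=${param/-/_}            # replace the first '-' with '_' (as above) -> db_host
export ${param^^}=${value}    # uppercase the name -> DB_HOST=localhost
echo "${DB_HOST}"
```

${param^^} needs bash 4 or later.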
108124c9abd5ad53eb73658b48ed48ac1ed8dbe1
Solaris OracleDB zone
0
188
2446
2361
2021-11-25T22:55:46Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Oracle Zone]]
=Setup an Oracle Database in a Solaris zone with CPU limit=
Our setup is an x86 server with 48GB RAM.
==Limit ZFS ARC==
Add to /etc/system:
<syntaxhighlight lang=bash>
set zfs:zfs_arc_max = <bytes as hex value>
</syntaxhighlight>
To calculate your own value:
<syntaxhighlight lang=bash>
# LIMIT_GB=8 ; printf "*\n** Limit ZFS ARC to %dGB\n*\nset zfs:zfs_arc_max = 0x%x\n" ${LIMIT_GB} $[${LIMIT_GB} * 1024 * 1024 * 1024]
*
** Limit ZFS ARC to 8GB
*
set zfs:zfs_arc_max = 0x200000000
</syntaxhighlight>
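The hex value can be cross-checked with plain shell arithmetic:

```shell
# Cross-check: 8 GB in bytes, printed as the hex value used above
LIMIT_GB=8
arc_max=$(printf '0x%x' $(( LIMIT_GB * 1024 ** 3 )))
echo "${arc_max}"
```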
==Create Zone==
Set values:
<syntaxhighlight lang=bash>
ZONENAME=oracle
ZONEPOOL=rpool
ZONEBASE=/var/zones
MAX_SHM_MEMORY=30G
LOCKED_MEMORY=30G
MAX_PHYS_MEMORY=34G
SWAP=${MAX_PHYS_MEMORY}
NUMBER_OF_CPUS=2
</syntaxhighlight>
Create zone with
<syntaxhighlight lang=bash>
zfs create -o mountpoint=none ${ZONEPOOL}/zones
zfs create -o compression=on -o mountpoint=${ZONEBASE}/${ZONENAME} ${ZONEPOOL}/zones/${ZONENAME}
chmod 700 ${ZONEBASE}/${ZONENAME}
printf "
create
set autoboot=true
set zonepath=${ZONEBASE}/${ZONENAME}
add dedicated-cpu
set ncpus=${NUMBER_OF_CPUS}
end
add capped-memory
set swap=${SWAP}
set physical=${MAX_PHYS_MEMORY}
set locked=${LOCKED_MEMORY}
end
set scheduling-class=FSS
set max-shm-memory=${MAX_SHM_MEMORY}
verify
commit
" | zonecfg -z ${ZONENAME} -f -
</syntaxhighlight>
Enable dynamic pool service to add support for dedicated-cpus:
<syntaxhighlight lang=bash>
svcadm enable svc:/system/pools/dynamic
</syntaxhighlight>
Install and boot:
<syntaxhighlight lang=bash>
zoneadm -z ${ZONENAME} install
zoneadm -z ${ZONENAME} boot
zlogin ${ZONENAME} usermod -s /bin/bash root
zlogin ${ZONENAME}
</syntaxhighlight>
CPU-check:
<syntaxhighlight lang=bash>
-bash-3.2# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3059 MHz)
Intel(r) Xeon(r) CPU X5675 @ 3.07GHz
</syntaxhighlight>
==Create ZPools==
I used this paper: [http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf]
Values are for Solaris 10.
<syntaxhighlight lang=bash>
DATABASEPOOL=dbpool
DATABASEPOOL_DATA_VDEV="mirror c1t1d0 c1t2d0"
DATABASEPOOL_ZIL_VDEV="mirror c1t3d0 c1t4d0"
REDOPOOL=redopool
REDOPOOL_DATA_VDEV="mirror c1t5d0 c1t6d0"
REDOPOOL_ZIL_VDEV="mirror c1t7d0 c1t8d0"
ARCHIVEPOOL=archivepool
ARCHIVEPOOL_DATA_VDEV="mirror c1t9d0 c1t10d0"
DB_BASEPATH=/database
DB_BLOCK_SIZE=8192
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${DATABASEPOOL} ${DATABASEPOOL_DATA_VDEV} log ${DATABASEPOOL_ZIL_VDEV}
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/data ${DATABASEPOOL}/data
zfs set logbias=throughput ${DATABASEPOOL}/data
zfs create -o recordsize=${DB_BLOCK_SIZE} -o mountpoint=${DB_BASEPATH}/index ${DATABASEPOOL}/index
zfs set logbias=throughput ${DATABASEPOOL}/index
zfs create -o mountpoint=${DB_BASEPATH}/temp ${DATABASEPOOL}/temp
zfs set logbias=throughput ${DATABASEPOOL}/temp
zfs create -o mountpoint=${DB_BASEPATH}/undo ${DATABASEPOOL}/undo
zfs set logbias=throughput ${DATABASEPOOL}/undo
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${REDOPOOL} ${REDOPOOL_DATA_VDEV} log ${REDOPOOL_ZIL_VDEV}
zfs create -o mountpoint=${DB_BASEPATH}/redo ${REDOPOOL}/redo
zfs set logbias=latency ${REDOPOOL}/redo
</syntaxhighlight>
<syntaxhighlight lang=bash>
zpool create ${ARCHIVEPOOL} ${ARCHIVEPOOL_DATA_VDEV}
zfs create -o compression=on -o mountpoint=${DB_BASEPATH}/archive ${ARCHIVEPOOL}/archive
zfs set primarycache=metadata ${ARCHIVEPOOL}/archive
</syntaxhighlight>
36c159f9692d70f9bfd38d71e14303a84f2d9439
Fail2ban
0
276
2447
2402
2021-11-25T22:55:57Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Security]]
[[Category:Linux]]
==Installation==
===Debian / Ubuntu===
<syntaxhighlight lang=bash>
# apt-get install fail2ban
</syntaxhighlight>
==Configuration==
To be safe across updates, put your personal settings in the <i>*.local</i> files. This protects them from being overwritten by update procedures.
===paths-overrides.local===
My log file names contain date parts, so fail2ban's defaults would fail to find the logs.
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
# doveadm log find
Looking for log files from /var/log
Debug: /var/log/dovecot/dovecot.debug-20160309
Info: /var/log/dovecot/dovecot.debug-20160309
Warning: /var/log/dovecot/dovecot.log-20160309
Error: /var/log/dovecot/dovecot.log-20160309
Fatal: /var/log/dovecot/dovecot.log-20160309
</syntaxhighlight>
<syntaxhighlight lang=ini>
[DEFAULT]
dovecot_log = /var/log/dovecot/dovecot.log-*
exim_main_log = /var/log/exim/mainlog-*
</syntaxhighlight>
===jail.local===
<syntaxhighlight lang=ini>
[DEFAULT]
bantime = 3600
[sshd]
enabled = true
[exim-spam]
enabled = true
[exim]
enabled = true
[sshd-ddos]
enabled = true
[dovecot]
enabled = true
[sieve]
enabled = true
</syntaxhighlight>
43872ae5badc05956696e41ce6ac2ae1a4eb1cfc
Template:Tausendfach verwendet
10
69
2448
123
2021-11-25T22:56:04Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{| {{Bausteindesign4}}
| valign="center" | [[Bild:Stop hand.svg|40px|alt=]]
| Diese Vorlage ist <span class="plainlinks">[{{fullurl:Spezial:Linkliste|target={{SUBJECTPAGENAMEE}}&limit=500&hideredirs=1&hidelinks=1}} ''vielfach eingebunden'']. Wenn du die Auswirkungen genau kennst, kannst du sie [{{fullurl:{{FULLPAGENAME}}|action=edit}} bearbeiten]</span>. Meist ist es jedoch sinnvoll, Änderungswünsche erst auf [[{{DISKUSSIONSSEITE}}]] abzustimmen.
|}<noinclude>
Diese Vorlage bitte '''immer''' mit <tt><noinclude>{{Tausendfach verwendet}}</noinclude></tt> in andere Vorlagen einbauen!
[[Category:Vorlage:Hinweisbaustein|{{PAGENAME}}]]
[[Category:Vorlage:für Vorlagen|{{PAGENAME}}]]
[[eo:Ŝablono:Milfoje]]
</noinclude>
2f9020b8fd7c567d07d1004096a40e2ef5351d2f
VMWare Certificate
0
280
2449
2388
2021-11-25T22:56:32Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:VMWare]]
[[Category:Security]]
== Generate a new certificate ==
=== Disable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 1
</pre>
=== Allow SSH through the firewall ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Firewall
-> Incoming Connections
-> Edit
-> Enable SSH server
</pre>
=== Enable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Start SSH
</pre>
<syntaxhighlight lang=bash>
$ ssh root@esx-host
~ # cd /etc/vmware/ssl
/etc/vmware/ssl # mv rui.key rui.key.orig
/etc/vmware/ssl # mv rui.crt rui.crt.orig
/etc/vmware/ssl # /sbin/generate-certificates
/etc/vmware/ssl # ls -al *.key *.crt
-rw-r--r-- 1 root root 1440 May 30 09:33 rui.crt
-r-------- 1 root root 1704 May 30 09:33 rui.key
</syntaxhighlight>
=== Disable SSH ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> Stop SSH
</pre>
=== Re-enable the shell warning ===
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Advanced System Settings
-> <search for "suppress">
-> UserVars.SuppressShellWarning
-> Edit: UserVars.SuppressShellWarning = 0
</pre>
=== Restart the CIM server ===
The CIM server must be restarted so that the new certificate is actually used.
<pre>
-> Inventory
-> Hosts and Clusters
-> <select ESX host>
-> Manage
-> Settings
-> System
-> Security Profile
-> Services
-> Edit
-> CIM Server
-> Restart
</pre>
51ddf0b8278eb1e6d167e6dbf97005e2cc5d69e4
Solaris ssh from DVD
0
111
2450
2330
2021-11-25T22:56:36Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|SSH]]
=Get SSH on a system booted from DVD=
==Mount DVD==
<syntaxhighlight lang=bash>
# iostat -En
c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: AMI Product: Virtual CDROM Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 732 Predictive Failure Analysis: 0
...
# mkdir /tmp/dvd
# mount -F hsfs -oro /dev/dsk/c0t0d0s0 /tmp/dvd
</syntaxhighlight>
==Unpacking software==
<syntaxhighlight lang=bash>
# mkdir /tmp/pkg
# pkgtrans /tmp/dvd/Solaris_10/Product /tmp/pkg SUNWsshu SUNWcry SUNWopenssl-libraries
# mkdir /tmp/ssh
# cd /tmp/ssh
# 7z x -so /tmp/pkg/SUNWsshu/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWcry/archive/none.7z | cpio -idv
# 7z x -so /tmp/pkg/SUNWopenssl-libraries/archive/none.7z | cpio -idv
</syntaxhighlight>
==Use unpacked libraries==
<syntaxhighlight lang=bash>
# crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
# crle
Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
Command line:
crle -c /var/ld/ld.config -l /tmp/ssh/usr/sfw/lib:/lib:/usr/lib
</syntaxhighlight>
==Check it==
<syntaxhighlight lang=bash>
# ldd /tmp/ssh/usr/bin/ssh
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libz.so.1 => /usr/lib/libz.so.1
libcrypto.so.0.9.7 => /usr/sfw/lib/libcrypto.so.0.9.7
libgss.so.1 => /usr/lib/libgss.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libcmd.so.1 => /lib/libcmd.so.1
libdoor.so.1 => /lib/libdoor.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
libm.so.2 => /lib/libm.so.2
</syntaxhighlight>
Looks good:
* libcrypto_extra.so.0.9.7 => /tmp/ssh/usr/sfw/lib/libcrypto_extra.so.0.9.7
==Use ssh from /tmp/ssh==
<syntaxhighlight lang=bash>
# /tmp/ssh/usr/bin/ssh <user>@<ip>
</syntaxhighlight>
3a6e73ed28fa006cd8e7af504551d36082cd4a05
ZFS nice commands
0
362
2451
2229
2021-11-25T22:56:47Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS]]
=Some ZFS commands I use often (on Linux)=
==zpool==
===Get zpool status===
<syntaxhighlight lang=bash>
# zpool status -P
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/disk/by-id/ata-SanDisk_SDSSDHII960G_151740411091-part4 ONLINE 0 0 0
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
<syntaxhighlight lang=bash>
# zpool status -PL
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
errors: No known data errors
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
* -L : Display real paths for vdevs resolving all symbolic links.
===Get zpool size===
<syntaxhighlight lang=bash>
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 788G 609G 179G - 53% 77% 1.00x ONLINE -
</syntaxhighlight>
Ooooh... bad fragmentation! So what? It's an SSD!
===Get the ashift value===
<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 9
</syntaxhighlight>
which means 2^9=512 := 512 byte blocks in the backend... that is uncool for SSDs.
<syntaxhighlight lang=bash>
# echo $[ 2 ** 12 ]
4096
# zpool set ashift=12 rpool
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 12
</syntaxhighlight>
which means 2^12=4096 := 4k blocks in the backend. Perfect!
==zfs==
==zdb==
===Traverse all blocks===
<syntaxhighlight lang=bash>
# zdb -b rpool
Traversing all blocks to verify nothing leaked ...
loading space map for vdev 0 of 1, metaslab 196 of 197 ...
609G completed (4928MB/s) estimated time remaining: 0hr 00min 00sec
No leaks (block sum matches space maps exactly)
bp count: 32920989
ganged count: 0
bp logical: 760060348928 avg: 23087
bp physical: 650570102784 avg: 19761 compression: 1.17
bp allocated: 654308115456 avg: 19875 compression: 1.16
bp deduped: 0 ref>1: 0 deduplication: 1.00
SPA allocated: 654308115456 used: 77.33%
additional, non-pointer bps of type 0: 237576
Dittoed blocks on same vdev: 1230844
</syntaxhighlight>
9ea02ff2e1e87288543bad3181f21082146fb290
2456
2451
2021-11-25T22:57:49Z
Lollypop
2
Text replacement - "<source " to "<syntaxhighlight "
wikitext
text/x-wiki
[[Category:ZFS]]
=Some ZFS commands I use often (on Linux)=
==zpool==
===Get zpool status===
<syntaxhighlight lang=bash>
# zpool status -P
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/disk/by-id/ata-SanDisk_SDSSDHII960G_151740411091-part4 ONLINE 0 0 0
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
<syntaxhighlight lang=bash>
# zpool status -PL
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
errors: No known data errors
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
* -L : Display real paths for vdevs resolving all symbolic links.
===Get zpool size===
<syntaxhighlight lang=bash>
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 788G 609G 179G - 53% 77% 1.00x ONLINE -
</syntaxhighlight>
Ooooh... bad fragmentation! So what? It's an SSD!
===Get the ashift value===
<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 9
</syntaxhighlight>
which means 2^9=512 := 512 byte blocks in the backend... that is uncool for SSDs.
<syntaxhighlight lang=bash>
# echo $[ 2 ** 12 ]
4096
# zpool set ashift=12 rpool
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME ASHIFT
rpool 12
</syntaxhighlight>
which means 2^12=4096 := 4k blocks in the backend. Perfect!
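The power-of-two arithmetic can be checked directly in the shell; this is plain bash arithmetic, nothing ZFS-specific:
<syntaxhighlight lang=bash>
# block size = 2^ashift, as plain bash arithmetic
for ashift in 9 12 13
do
    printf "ashift=%d -> %d byte blocks\n" ${ashift} $(( 1 << ashift ))
done
</syntaxhighlight>
If I remember correctly, the ashift of an existing vdev is fixed at creation time, so setting the pool property should only affect vdevs added later.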
==zfs==
==zdb==
===Traverse all blocks===
<syntaxhighlight lang=bash>
# zdb -b rpool
Traversing all blocks to verify nothing leaked ...
loading space map for vdev 0 of 1, metaslab 196 of 197 ...
609G completed (4928MB/s) estimated time remaining: 0hr 00min 00sec
No leaks (block sum matches space maps exactly)
bp count: 32920989
ganged count: 0
bp logical: 760060348928 avg: 23087
bp physical: 650570102784 avg: 19761 compression: 1.17
bp allocated: 654308115456 avg: 19875 compression: 1.16
bp deduped: 0 ref>1: 0 deduplication: 1.00
SPA allocated: 654308115456 used: 77.33%
additional, non-pointer bps of type 0: 237576
Dittoed blocks on same vdev: 1230844
</syntaxhighlight>
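The compression ratios in the zdb output can be re-derived from the raw byte counters it prints; a quick awk check (numbers taken from the output above):
<syntaxhighlight lang=bash>
# compression = bp logical / bp physical
awk 'BEGIN { printf "%.2f\n", 760060348928 / 650570102784 }'
# -> 1.17
# compression = bp logical / bp allocated
awk 'BEGIN { printf "%.2f\n", 760060348928 / 654308115456 }'
# -> 1.16
</syntaxhighlight>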
81f8e2761afe0546954a1dbbd0213980d0153b65
ZFS fast scrub
0
141
2452
2256
2021-11-25T22:56:55Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:ZFS|fast scrub]]
[[Kategorie:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get to production state after a bloody hard unplanned downtime... and so on...
I would expect you not to do this.
But it worked for me:
<syntaxhighlight lang=bash>
# echo "zfs_scrub_delay/D" | mdb -k
zfs_scrub_delay:
zfs_scrub_delay:4
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</syntaxhighlight>
This sets the scrub delay to zero... your system will do a lot of scrubbing and not so much other things.
Remember to set it back to the old value later (4 in this example)!
<syntaxhighlight lang=bash>
# echo "zfs_scrub_delay/W4" | mdb -kw
zfs_scrub_delay:0x0 = 0x4
</syntaxhighlight>
But remember I told you: NEVER DO THIS!!!
0da095477fa82bf965d0e0064081c597d8f909ad
2482
2452
2021-11-26T00:32:30Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS|fast scrub]]
[[Category:Solaris]]
NEVER DO THIS!!!
If you need a fast scrub to get to production state after a bloody hard unplanned downtime... and so on...
I would expect you not to do this.
But it worked for me:
<syntaxhighlight lang=bash>
# echo "zfs_scrub_delay/D" | mdb -k
zfs_scrub_delay:
zfs_scrub_delay:4
# echo "zfs_scrub_delay/W0" | mdb -kw
zfs_scrub_delay:0x4 = 0x0
</syntaxhighlight>
This sets the scrub delay to zero... your system will do a lot of scrubbing and not so much other things.
Remember to set it back to the old value later (4 in this example)!
<syntaxhighlight lang=bash>
# echo "zfs_scrub_delay/W4" | mdb -kw
zfs_scrub_delay:0x0 = 0x4
</syntaxhighlight>
But remember I told you: NEVER DO THIS!!!
bf87b88e5969c340cf3b9f14abf279853b8a92d8
PHP
0
361
2453
2399
2021-11-25T22:56:59Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:PHP]]
==Install mcrypt on Ubuntu 18.04==
<syntaxhighlight lang=bash>
$ sudo apt -y install gcc make autoconf libc-dev pkg-config libmcrypt-dev php7.2-dev
$ sudo pecl install --nodeps mcrypt-snapshot
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ echo "extension=mcrypt.so" | sudo tee -a /etc/php/7.2/fpm/php.ini
$ php-fpm7.2 -i | grep mc
Registered Stream Filters => zlib.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, mcrypt.*, mdecrypt.*, bzip2.*, convert.iconv.*
mcrypt
mcrypt support => enabled
mcrypt_filter support => enabled
mcrypt.algorithms_dir => no value => no value
mcrypt.modes_dir => no value => no value
</syntaxhighlight>
2232bc225db0eb047e9cd2f59af36d4cbf27ff99
NGINX
0
363
2454
2314
2021-11-25T22:57:25Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NGINX]]
==Add module to nginx on Ubuntu==
For example http-auth-ldap:
<syntaxhighlight lang=bash>
mkdir /opt/src
cd /opt/src
apt source nginx
cd nginx-*
export HTTPS_PROXY=<your proxy server>
git clone https://github.com/kvspb/nginx-auth-ldap.git debian/modules/http-auth-ldap
./configure \
--with-cc-opt="$(dpkg-buildflags --get CFLAGS) -fPIC $(dpkg-buildflags --get CPPFLAGS)" \
--with-ld-opt="$(dpkg-buildflags --get LDFLAGS) -fPIC" \
--prefix=/usr/share/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--with-http_v2_module \
--with-threads \
--without-http_gzip_module \
--add-dynamic-module=debian/modules/http-auth-ldap
make modules
sudo install --mode=0644 --owner=root --group=root objs/ngx_http_auth_ldap_module.so /usr/lib/nginx/modules/
</syntaxhighlight>
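To actually use the module, nginx still has to load it and be told about an LDAP server. A minimal sketch (directive names as documented by the nginx-auth-ldap module; server name, URL and realm are placeholders you have to adapt):
<syntaxhighlight lang=nginx>
# /etc/nginx/nginx.conf (fragment)
load_module modules/ngx_http_auth_ldap_module.so;

http {
    ldap_server my_ldap {
        url "ldap://<ldap-server>/ou=people,dc=example,dc=org?uid?sub";
        require valid_user;
    }
    server {
        location / {
            auth_ldap "Restricted";
            auth_ldap_servers my_ldap;
        }
    }
}
</syntaxhighlight>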
c03e588affbbe3462d12a724cefaff5caa403f76
Networker Tipps und Tricks
0
204
2455
936
2021-11-25T22:57:33Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Backup]]
==Check backup status==
Show whether a backup has run for the client <networker-client> within the last 24 hours:
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,savetime(17),name,sumsize" -t "1 day ago" -q client=<networker-client>
</syntaxhighlight>
Or the last backup for a client:
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,group,savetime(17),name,sumsize" -q "group=<group>,client=<networker-client>"
</syntaxhighlight>
==Recover/Restore==
Find the SSID of the saveset:
<syntaxhighlight lang=bash>
# mminfo -s <networker-server> -q "client=<networker-client>,name=<directory>" -r "ssid,name,savetime(17)"
2752466240 <directory> 03/23/15 00:16:16
...
387566382 <directory> 03/31/15 00:16:14
</syntaxhighlight>
OK, we want the backup from 2015-03-31 00:16:14, i.e. SSID 387566382.
Restore it to a destination directory:
<syntaxhighlight lang=bash>
# recover -s <networker-server> -S 387566382 -d <destination-directory>
</syntaxhighlight>
Attention: these are ONLY the files that were backed up on that day!
If you want to restore everything to the state it was in at a certain point in time, do it like this:
<syntaxhighlight lang=bash>
# recover -s <networker-server> -c <networker-client> -t '03/31/15 00:16:14' -d <destination-directory> -a <directory>
</syntaxhighlight>
bbe5cdd9f7d9639b21b818e64786a094e2713f38
2460
2455
2021-11-25T22:59:11Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Backup]]
==Check backup status==
Show whether a backup has run for the client <networker-client> within the last 24 hours:
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,savetime(17),name,sumsize" -t "1 day ago" -q client=<networker-client>
</syntaxhighlight>
Or the last backup for a client:
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,group,savetime(17),name,sumsize" -q "group=<group>,client=<networker-client>"
</syntaxhighlight>
==Recover/Restore==
Find the SSID of the saveset:
<syntaxhighlight lang=bash>
# mminfo -s <networker-server> -q "client=<networker-client>,name=<directory>" -r "ssid,name,savetime(17)"
2752466240 <directory> 03/23/15 00:16:16
...
387566382 <directory> 03/31/15 00:16:14
</syntaxhighlight>
OK, we want the backup from 2015-03-31 00:16:14, i.e. SSID 387566382.
Restore it to a destination directory:
<syntaxhighlight lang=bash>
# recover -s <networker-server> -S 387566382 -d <destination-directory>
</syntaxhighlight>
Attention: these are ONLY the files that were backed up on that day!
If you want to restore everything to the state it was in at a certain point in time, do it like this:
<syntaxhighlight lang=bash>
# recover -s <networker-server> -c <networker-client> -t '03/31/15 00:16:14' -d <destination-directory> -a <directory>
</syntaxhighlight>
1f89c2d35cea1d788b6250510d1e3913de34c99e
Category:SunCluster
14
33
2457
58
2021-11-25T22:58:38Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris]]
b2957bda5fdd0cfbd2a3c12d4f811f750d2f9508
Category:Amorphophallus
14
81
2458
162
2021-11-25T22:58:46Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Araceae]]
5024b68e594e35c88018933d70ac6158500c45b3
Category:Perl
14
179
2459
539
2021-11-25T22:59:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
MariaDB SSL
0
295
2461
2295
2021-11-25T23:00:13Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:MariaDB|SSL]]
[[Category:MySQL|SSL]]
To be continued!
==Create keys and certificates==
<syntaxhighlight lang=bash>
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server'
</syntaxhighlight>
<syntaxhighlight lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=web-server.domain.de'
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server.domain.de'
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
chown mysql:www-data *
chown www-data:www-data client-key.pem
chmod 644 *-cert.pem
chmod 600 *-key.pem
</syntaxhighlight>
<syntaxhighlight lang=php>
# php -r '
$db = new PDO("mysql:host=db-server.domain.de;dbname=testdb", "ssltestuser", "ssltestuserpassword",
array(
PDO::MYSQL_ATTR_SSL_CA=>"/etc/mysql/ssl/ca-cert.pem",
PDO::MYSQL_ATTR_SSL_KEY=>"/etc/mysql/ssl/client-key.pem",
PDO::MYSQL_ATTR_SSL_CERT=>"/etc/mysql/ssl/client-cert.pem",
PDO::MYSQL_ATTR_SSL_CAPATH=>"/etc/ssl/certs"
)
);
$result = $db->query("SHOW STATUS LIKE \"SSL_%\"");
$result->execute();
$status=$result->fetchAll();
print_r($status);
'
</syntaxhighlight>
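Before pointing MariaDB and PHP at the files, it is worth checking that the signed certificate really chains back to the CA. The same openssl commands as above, condensed into a throwaway directory (the subject names are just examples):
<syntaxhighlight lang=bash>
dir=$(mktemp -d) && cd ${dir}
openssl genrsa 2048 > ca-key.pem 2>/dev/null
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/CN=test-ca'
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj '/CN=db-server.domain.de' 2>/dev/null
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem 2>/dev/null
openssl verify -CAfile ca-cert.pem server-cert.pem
# -> server-cert.pem: OK
</syntaxhighlight>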
451a71497a0ecce0e0ea2b49d14282e734b78ead
NetApp move root vol
0
91
2462
221
2021-11-25T23:00:20Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NetApp]]
=Migrating the root volume to a new aggregate=
This can also be used to migrate to a 64-bit aggregate...
==Create the new root volume==
<pre>
> priv set advanced
*> aggr create aggr0_new -B 64 -t raid4 -d <disk0> <disk1>
*> vol create vol0_new aggr0_new 250g
</pre>
==Copy the data==
To be able to enable ndmpd, at least one network interface must have a link!
<pre>
*> options ndmpd.enable on
*> ndmpcopy -f /vol/vol0 /vol/vol0_new
Ndmpcopy: Starting copy [ 0 ] ...
...
*> vol options vol0_new root
*> reboot
</pre>
==Clean up==
<pre>
> vol status
Volume State Status Options
vol0_new online raid4, flex root, create_ucode=on
64-bit
vol0 online raid_dp, flex create_ucode=on
64-bit
> vol offline vol0
> vol destroy vol0
> aggr offline aggr0
> aggr destroy aggr0
> disk zero spares
</pre>
==Set the standard names==
<pre>
> aggr rename aggr0_new aggr0
> vol rename vol0_new vol0
> reboot
</pre>
==Convert the aggregate to raid_dp using the freed-up disks==
<pre>
> aggr status -r aggr0
Aggregate aggr0 (online, raid4) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
parity 0a.00.23 0a 0 23 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
data 0a.00.6 0a 0 6 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
> aggr options aggr0 raidtype raid_dp
> aggr status -r aggr0
Aggregate aggr0 (online, raid_dp, reconstruct) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (reconstruction 0% completed, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.3 0a 0 3 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816 (reconstruction 0% completed)
parity 0a.00.23 0a 0 23 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
data 0a.00.6 0a 0 6 SA:A 0 BSAS 7200 1695466/3472315904 1695759/3472914816
</pre>
efd6d861ca50a0f8d0f493fc7cf676d8029d8be7
Category:Oracle
14
221
2463
794
2021-11-25T23:00:50Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Solaris cluster clone
0
185
2464
2283
2021-11-25T23:00:55Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Cluster Clone]][[Category:SunCluster|Clone]]
If you need to recreate a cluster node from a surviving node, perform the following steps:
==Clone system disk==
For example via metattach to the metaroot.
==Edit the usual Solaris parameters==
/etc/nodename
/etc/hostname.*
Check: /etc/inet/hosts
If the root disk is mirrored by SVM:
# Edit /etc/vfstab of the clone to use the plain devices
# Edit /etc/system:
<syntaxhighlight lang=bash>
* Begin MDD root info (do not edit)
** rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
</syntaxhighlight>
Unmount the cloned disk
fsck the root slice of the cloned disk
==Edit the cluster parameters==
Get the right id from:
<syntaxhighlight lang=bash>
# nawk '/cluster\.nodes\.[^.]*\.name/{split($1,field,"."); print field[3],$NF}' /etc/cluster/ccr/global/infrastructure
1 node-a
2 node-b
</syntaxhighlight>
Then set the node id on the clone:
 echo <nodeid> > /etc/cluster/nodeid
for example for node-b:
 echo 2 > /etc/cluster/nodeid
37a34cdd252f401367c25cbbcf5c7b9f14265e89
Awk cheatsheet
0
292
2465
2403
2021-11-25T23:00:59Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:AWK|Cheatsheet]]
==Functions==
===Bytes to human readable===
<syntaxhighlight lang=awk>
function b2h(value){
# Bytes to human readable
unit=1;
while(value>=1024){
unit++;
value/=1024;
}
split("B,KB,MB,GB,TB,PB", unit_string, /,/);
return sprintf("%.2f%s",value,unit_string[unit]);
}
</syntaxhighlight>
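A quick way to try b2h from the shell. The function is the one above, with the split separator written as a plain string instead of a regex, which is more portable across awk implementations:
<syntaxhighlight lang=bash>
awk '
function b2h(value){
    # Bytes to human readable
    unit=1;
    while(value>=1024){
        unit++;
        value/=1024;
    }
    split("B,KB,MB,GB,TB,PB", unit_string, ",");
    return sprintf("%.2f%s",value,unit_string[unit]);
}
BEGIN {
    print b2h(512);        # -> 512.00B
    print b2h(1536);       # -> 1.50KB
    print b2h(1073741824); # -> 1.00GB
}'
</syntaxhighlight>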
===Binary to decimal===
<syntaxhighlight lang=awk>
function b2d(bin, len,i,dec){
# len, i and dec are extra parameters so they stay local to the function
dec=0;
len=length(bin);
for(i=1;i<=len;i++){
dec+=substr(bin,i,1)*2^(len-i);
}
return dec;
}
</syntaxhighlight>
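Called from the shell it looks like this. Note that len, i and dec are passed as extra parameters so they stay local to the function; with a global dec the result would keep growing across calls:
<syntaxhighlight lang=bash>
awk '
function b2d(bin, len,i,dec){
    dec=0;
    len=length(bin);
    for(i=1;i<=len;i++){
        dec+=substr(bin,i,1)*2^(len-i);
    }
    return dec;
}
BEGIN {
    print b2d("1010");     # -> 10
    print b2d("11111111"); # -> 255
}'
</syntaxhighlight>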
===Quicksort===
This is not my code! It is taken from [http://awk.info/?quicksort here], maybe slightly modified; I cannot check, the site is down.
You can call it like this: qsort(array,1,length(array));
<syntaxhighlight lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</syntaxhighlight>
Same for alphanumeric:
<syntaxhighlight lang=awk>
# BEGIN http://awk.info/?quicksort
function qsort(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort(A, left, last-1)
qsort(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
# END http://awk.info/?quicksort
</syntaxhighlight>
Test:
<syntaxhighlight lang=awk>
BEGIN {
string="1524097359810345254";
split(string,array_i,"");
string="ThisIsAQsortExample";
split(string,array_a,"");
}
# BEGIN http://awk.info/?quicksort
function qsort_i(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (int(A[i]) < int(A[left]))
swap(A, ++last, i)
swap(A, left, last)
qsort_i(A, left, last-1)
qsort_i(A, last+1, right)
}
function qsort_a(A, left, right, i, last) {
if (left >= right)
return;
swap(A, left, left+int((right-left+1)*rand()));
last = left;
for (i = left+1; i <= right; i++)
if (A[i] < A[left])
swap(A, ++last, i)
swap(A, left, last)
qsort_a(A, left, last-1)
qsort_a(A, last+1, right)
}
# Helper function to swap two elements of an array
function swap(A, i, j, t) {
t = A[i]; A[i] = A[j]; A[j] = t
}
END {
for(element in array_i)
printf array_i[element];
printf " ===qsort==> "
qsort_i(array_i,1,length(array_i));
for(element in array_i)
printf array_i[element];
print;
for(element in array_a)
printf array_a[element];
printf " ===qsort==> "
qsort_a(array_a,1,length(array_a));
for(element in array_a)
printf array_a[element];
print;
}
</syntaxhighlight>
which outputs:
<pre>
1524097359810345254 ===qsort==> 0011223344455557899
ThisIsAQsortExample ===qsort==> AEIQTaehilmoprssstx
</pre>
===Sort words inside braces (gawk)===
Written for beautifying lines like
<syntaxhighlight lang=mysql>
GRANT SELECT (account_id, user, enable_imap, fk_domain_id, enable_virusscan, max_msg_size, from_authuser_only, id, time_start, changed_by, enable_spamblocker, onhold, time_end, archive, mailbox_quota) ON `mail_db`.`mail_account` TO 'user'@'172.16.16.16'
</syntaxhighlight>
This function:
<syntaxhighlight lang=awk>
function inner_brace_sort (rest, delimiter) {
sorted="";
while( match(rest,/\([^\)]+\)/) ) {
sorted=sprintf("%s%s", sorted, substr(rest, 1, RSTART));
inner=substr(rest, RSTART+1, RLENGTH-2);
rest=substr(rest, RSTART+RLENGTH-1, length(rest));
split(inner, inner_a, delimiter);
inner_l=asort(inner_a, inner_s);
for(i=1; i<=inner_l; i++) {
sorted=sprintf("%s%s", sorted, inner_s[i]);
if(i<inner_l) sorted=sprintf("%s, ", sorted);
}
sorted=sprintf("%s", sorted);
}
return sorted""rest;
}
</syntaxhighlight>
Sorts the fields inside the braces alphabetically and can be called like this:
<syntaxhighlight lang=awk>
/\(/ {
print inner_brace_sort($0, ",[ ]*");
}
</syntaxhighlight>
036f06dc34afcd71cc2d2e6ed3beb23340e63e28
Solaris 11 Zones
0
257
2466
2289
2021-11-25T23:01:15Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11|Zones]]
==zoneclone.sh==
<syntaxhighlight lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</syntaxhighlight>
==Way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (indexing man pages etc.) could not be done in the immutable zone after the Solaris update, so one <i>boot -w</i> is necessary before the zone comes up in the cluster.
<syntaxhighlight lang=bash>
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</syntaxhighlight>
===Move all RGs from node first===
<syntaxhighlight lang=bash>
# clrg evacuate -n $(hostname) +
</syntaxhighlight>
===Update Solaris===
<syntaxhighlight lang=bash>
# pkg update --be-name $(pkg info -r system/kernel | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}') --accept -v
# init 6
</syntaxhighlight>
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online
<syntaxhighlight lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</syntaxhighlight>
===Attach, boot -w, detach without cluster===
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zlogin zone01 svcs -xv # <- wait for all services to be ready
...
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</syntaxhighlight>
===Enable zone in cluster===
<syntaxhighlight lang=bash>
# clrs enable zone01-zone-rs
</syntaxhighlight>
==Some other things==
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</syntaxhighlight>
<syntaxhighlight lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the sofware in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</syntaxhighlight>
275525a08f668df2bdbf69a11b401f46467909ae
Cypraea annulus
0
124
2467
366
2021-11-25T23:01:40Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Kaurischnecke
| WissName = Cypraea annulus
| Autor = GRAY, 1825
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Algen
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
| Winterruhe =
}}
520963b5a27b5dd7ab617540d585f626958f9139
Find free ip
0
366
2468
2412
2021-11-25T23:07:50Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: Bash|find_free_ip]]
<syntaxhighlight lang=bash>
#!/bin/bash
#
# $Id: find_free_ip.sh,v 1.2 2019/09/06 14:33:32 lollypop Exp $
# $Source: /var/cvs/lollypop/scripts/linux/find_free_ip.sh,v $
#
# Written in 2019 by Lars Timmann <L@rs.Timmann.de>
#
function usage () {
printf "Usage: ${0} <ip address>[/(<CIDR suffix>|<netmask>)]\n\n"
printf " This script searches a range of IP addresses for ones that have no reverse DNS.\n"
printf " Default range if no CIDR suffix or netmask is given is a class C (/24) range of 256 addresses.\n"
printf " address : This has to be an IPv4 address. Zero octets can be omitted.\n"
printf " For example 192.168 is sufficient for 192.168.0.0 .\n"
printf " CIDR suffix : This describes the number of bits set to 1 from left in the netmask.\n"
printf " netmask : Four octets representing the netmask.\n"
printf "\n"
}
case ${1} in
""|--help|-h)
usage
exit 1
;;
*)
input=${1}
;;
esac
case $(uname -s) in
Linux)
PING='ping -4 -c 1 -n -q -W 1 ${ip}'
;;
SunOS)
PING='ping -s -A inet -n -t 1 ${ip} 56 1'
;;
esac
IFS='/' read -ra parts <<< "${input}"
address=${parts[0]}
suffix=${parts[1]:-24}
# build binary notation from CIDR suffix
function ones2bin () {
ones=${1}
printf "%0.s1" $(seq 1 ${ones})
[ ${ones} -lt 32 ] && printf "%0.s0" $(seq 1 $[ 32 - ${ones} ])
}
function bin2ones () {
bin=${1}
ones=0
for((i=0;i<${#bin};i++))
do
bit=${bin:$i:1}
[ ${bit} -eq 0 ] && break
ones=$[ ones + 1 ]
done
echo ${ones}
}
# decimal number to octets
# for example: 2130706689 -> 127.0.1.1
function dec2ipv4 () {
ipdec=${1}
octets=()
for((i=24;i>=0;i-=8))
do
octet=$((${ipdec} >> ${i}))
octets+=(${octet})
ipdec=$(( ${ipdec} - ( ${octet} << ${i} ) ))
done
echo $(IFS=.;echo "${octets[*]}")
}
# ipv4 to decimal
function ipv42dec () {
ipv4=$1
dec=0
IFS='.' read -ra octets <<< "${ipv4}"
for ((i=0;i<4;i++))
do
dec=$(( dec + ${octets[i]} * ( 256 ** ( 3 - i ) ) ))
done
echo ${dec}
}
# decimal to binary
function dec2bin () {
dec=$1
bin=""
for((i=${dec};i>0;i>>=1))
do
bin=$(( ${i} % 2 ))${bin}
done
echo ${bin}
}
# binary to decimal : dec = $(( 2#010001010001 ))
# binary complement
function binaryComplement () {
unset complement
binary=$1
for((i=0;i<${#binary};i++))
do
complement+=$(( ${binary:${i}:1} ^ 1 ))
done
echo $complement
}
# Add missing octets
function fillOctets () {
IFS='.' read -ra octets <<< "${1}"
for ((i=${#octets[@]};i<4;++i))
do
octets+=(0)
done
echo "$(IFS=. ; echo "${octets[*]}")"
}
if [[ ${suffix} =~ ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ ]]
then
suffixbin=$(dec2bin $(ipv42dec $(fillOctets ${suffix})))
else
suffixbin=$(ones2bin ${suffix})
fi
address=$(fillOctets ${address})
firstipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) ))
network=$(dec2ipv4 ${firstipdec})
lastipdec=$(( ( $(ipv42dec ${address}) & 2#${suffixbin} ) | 2#$(binaryComplement ${suffixbin}) ))
broadcast=$(dec2ipv4 ${lastipdec})
netmask=$(dec2ipv4 $(( 2#${suffixbin} )) )
printf "Your request:\t${address}/$(bin2ones ${suffixbin})\nNetwork:\t${network}\nBroadcast:\t${broadcast}\nNetmask:\t${netmask}\nSearching in:\t${network}-${broadcast}\n"
printf "%0.s-" $(seq 1 80) ; echo
count=1
bool=( yes no )
for((i=${firstipdec};i<=${lastipdec};i++))
do
ip=$(dec2ipv4 ${i})
info=$(getent hosts ${ip})
if [ "_${info}_" == "__" ]
then
eval ${PING} >/dev/null 2>&1 ; pingable=$?
case ${ip} in
${network})
remark="This is the network IP."
;;
${broadcast})
remark="This is the broadcast IP."
;;
*)
remark=""
;;
esac
printf "%s\tfrei\t%d\t( got a pong: %s )\t%s\n" "${ip}" "${count}" "${bool[${pingable}]}" "${remark}"
count=$(( count + 1 ))
else
printf "%s\n" "${info}"
count=1
fi
done
</syntaxhighlight>
610b6fe4d792826e38f65b04ee7e3a7893a930ec
Category:Arundo
14
42
2469
158
2021-11-25T23:09:38Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Poaceae]]
c9890dfec21e69cd000f360f28d9fd74ab5b1b9f
NetApp SMO
0
77
2470
773
2021-11-25T23:11:15Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NetApp|SMO]]
=Installation of SnapManager for Oracle on Solaris=
==HostUtilities==
<pre>
# cd /tmp
# gtar xzf ~/smo/netapp_solaris_host_utilities_5_1_sparc.tar.gz
# pkgadd -d NTAPSANTool.pkg
</pre>
And for heaven's sake, do NOT run:
# /opt/NTAP/SANToolkit/bin/mpxio_set -e --no-never-do-this
Otherwise ALUA will not work!
<pre>
# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set
# touch /reconfigure
# init 6
</pre>
Test with:
<pre>
# /opt/NTAP/SANToolkit/bin/sanlun fcp show adapter
# /opt/NTAP/SANToolkit/bin/sanlun lun show all
</pre>
==SnapDrive==
<pre>
# cd /tmp
# gtar xzf ~/smo/NTAPsnapdrive_sun_sparc_5.0P1.tar.Z
# pkgadd -d NTAPsnapdrive_sun_sparc_5.0/NTAPsnapdrive.pkg
</pre>
Now adjust /opt/NTAPsnapdrive/snapdrive.conf.
For Solaris with MPxIO and UFS, /opt/NTAPsnapdrive/snapdrive.conf then looks like this:
<pre>
# Snapdrive Configuration
# file: /opt/NTAPsnapdrive/snapdrive.conf
# Version 5.0 (Change 1612424 Built 'Sun Feb 26 03 11 54 PST 2012')
#
# Default values are shown by lines which are commented-out in this file.
# If there is no un-commented-out line in this file relating to a particular value, then
# the default value represented in the commented-out line is what SnapDrive will use.
#
# To change a value:
#
# -- copy the line that is commented out to another line
# -- Leave the commented-out line
# -- Modify the new line to remove the '#' and to set the new value.
# -- Save the file and exit
# -- Also remember to restart the snapdrive daemon by issuing 'snapdrived restart'
#
#
#PATH="/sbin:/usr/sbin:/bin:/usr/lib/vxvm/bin:/usr/bin:/opt/VRTS/bin:/etc/vx/bin" #toolset search path
all-access-if-rbac-unspecified="on" #Allows access to all filer operations if the RBAC permissions file is missing in filer volume
#audit-log-file="/var/log/sd-audit.log" #Audit Log File Path
#audit-log-max-size=20480 #Maximum size (in bytes) of audit log file
#audit-log-save=2 #Number of historical audit log file to save
#autosupport-enabled="on" #Enable autosupport (requires autosupport-filer be set)
#available-lun-reserve=8 #Number of LUNs for which to reserve host resources
#check-export-permission-nfs-clone="on" #Checks if the host has nfs export permissions for resource being connected
#client-trace-log-file="/var/log/sd-client-trace.log" #client trace log file (Probably never used or useful)
#cluster-operation-timeout-secs=600 #Cluster Operation timeout in seconds (Useful only on SFRAC Environments). Increase this value if you frequent failures in SFRAC environments
#contact-http-dfm-port=8088 #HTTP server port to contact to access the DFM (Change this only if you have modified DFM Server settings)
#contact-http-port=80 #HTTP port to contact to access the filer (This should not be changed most of the time)
#contact-http-port-sdu-daemon=4094 #HTTP port on which sdu daemon will bind
#contact-https-port-sdu-daemon=4095 #HTTPS port on which sdu daemon will bind
#contact-ssl-dfm-port=8488 #SSL server port to contact to access the DFM
#contact-ssl-port=443 #SSL port to contact to access the filer
#contact-viadmin-port=8043 #HTTP/HTTPS port to contact to access the virtual interface admin
#daemon-trace-log-file="/var/log/sd-daemon-trace.log" #daemon trace log file
#datamotion-cutover-wait=120 #Wait time in seconds during data motion
#default-noprompt="off" #A default value for -noprompt option in the command line
default-transport="fcp" #Transport type to use for storage provisioning, when a decision is needed
#device-retries=3 #Number of retries on Ontap filer LUN device inquiry (This is no longer useful or used)
#device-retry-sleep-secs=1 #Number of seconds between Ontap filer LUN device inquiry retries (This is no longer useful or used)
#dfm-api-timeout=180 #Timeout in seconds for calling DFM API
#dfm-rbac-retries=12 #Number of access retries until DFM Refreshes (Increase this value if DFM is unable to discover newly created Volumes)
#dfm-rbac-retry-sleep-secs=15 #Number of seconds between DFM rbac access retries(Increase this value if DFM is unable to discover the Volume)
#do-lunclone="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#enable-alua="on" #Enable ALUA for the igroup
#enable-fcp-cache="on" #Enable FCP Cache in Assistants
#enable-implicit-host-preparation="on" #Enable implicit host preparation for LUN creation
#enable-parallel-operations="on" #Enable support for parallel operations
#enable-split-clone="off" #Enable split clone volume or lun during connnect/disconnect
#filer-restore-retries=1440 #Number of retries while doing lun restore
#filer-restore-retry-sleep-secs=15 #Number of secs between retries while restoring lun
#filesystem-freeze-timeout-secs=300 #File system freeze timeout in seconds
#flexclone-writereserve-enabled="off" #Enable space reservations during FlexClone creation
fstype="ufs" #File system to use when more than one file system is available
#lun-onlining-in-progress-retries=40 #Number of retries when lun onlining in progress after VBSR
#lun-onlining-in-progress-sleep-secs=3 #Number of secs between retries when lun onlining in progress after VBSR
#mgmt-retries=2 #Number of retries on ManageONTAP control channel
#mgmt-retry-sleep-long-secs=90 #Number of seconds between retries on ManageONTAP control channel (failover error)
#mgmt-retry-sleep-secs=2 #Number of seconds between retries on ManageONTAP control channel
#migrate-file="/opt/NTAPsnapdrive/.migfile" #Location of Migrate File
#multipathing-type="DMP" #Multipathing software to use when more than one multipathing solution is available.
multipathing-type="mpxio" #Multipathing software to use when more than one multipathing solution is available.
#password-file="/opt/NTAPsnapdrive/.pwfile" #location of password file
#portset-file="/opt/NTAPsnapdrive/.portset" #location of portset configuration file
#prefix-clone-name="" #Prefix string for naming FlexClone
#prefix-filer-lun="" #Prefix for all filer LUN names internally generated by storage create
#prepare-lun-count=16 #Number of LUNs for which to request host preparation
#rbac-cache="off" #Use RBAC cache when all DFM servers are down. Active only when rbac-method is dfm.
#rbac-method="native" #Role Based Access Control(RBAC) methods
#recovery-log-file="/var/log/sd-recovery.log" #recovery log file
#recovery-log-save=20 #Number of old copies of recovery log file to save
#san-clone-method="lunclone" #Clone methods for snap connect
#sdu-daemon-certificate-path="/opt/NTAPsnapdrive/snapdrive.pem" #location of https server certificate
#sdu-password-file="/opt/NTAPsnapdrive/.sdupw" #location of SDU Daemon and DFM password file
#secure-communication-among-cluster-nodes="off" #Enable Secure Communication (Useful only on SFRAC environments)
#sfsr-polling-frequency=10 #Sleep for the given amount of seconds before attempting SFSR
#snapconnect-nfs-removedirectories="off" #NFS snap connect cleaup unwanted dirs;
#snapcreate-cg-timeout="relaxed" #Timeout type used in snapshot creation with Consitency Groups.
#snapcreate-check-nonpersistent-nfs="on" #Check that entries exist in persistent filesystem file for specified nfs fs.
#snapcreate-consistency-retries=3 #Number of retries on best-effort snapshot consistency check failure
#snapcreate-consistency-retry-sleep=1 #Number of seconds between best-effort snapshot consistency retries
#snapcreate-must-make-snapinfo-on-qtree="off" #snap create must be able to create snapinfo on qtree
#snapdelete-delete-rollback-with-snap="off" #Delete all rollback snapshots related to specified snapshot
#snapmirror-dest-snap-support-enabled="on" #Enables snap restore and snap connect commands to deal with snapshots which were moved to another filer volume (e.g. via SnapMirror)
#snaprestore-delete-rollback-after-restore="on" #Delete rollback snapshot after a successfull restore
#snaprestore-make-rollback="on" #Create snap rollback before restore
#snaprestore-must-make-rollback="on" #Do not continue 'snap restore' if rollback creation fails
#snaprestore-snapmirror-check="on" #Enable snapmirror destination volume check in snap restore
#space-reservations-enabled="on" #Enable space reservations when creating new luns
#space-reservations-volume-enabled="snapshot" #Enable space reservation over volume.
#split-clone-async="on" #Lunclone for Dataset mount_backup if readonly qtree is detected
#trace-enabled="on" #Enable trace
#trace-level=7 #Trace levels: 1=FatalError; 2=AdminError; 3=CommandError; 4=warning, 5=info, 6=verbose, 7=full
#trace-log-file="/var/log/sd-trace.log" #trace log file
#trace-log-max-size=10485760 #Maximum size of trace log file in bytes; 0 means one trace log file per command
#trace-log-save=100 #Number of old copies of trace log file to save
#use-efi-label="off" #Enables use of EFI labels on Solaris which is required for lun size > 1 TB
#use-https-to-dfm="on" #Communication with DFM done via HTTPS instead of HTTP
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
#use-https-to-sdu-daemon="off" #Communication with daemon done via HTTPS instead of HTTP
#use-https-to-viadmin="on" #Specifies if HTTPS must be used to communicate with SMVI Product
#vif-password-file="/opt/NTAPsnapdrive/.vifpw" #location of Virtual Interface Server password file
#virtualization-operation-timeout-secs=600 #Virtualization Operation timeout in seconds
#vmtype="vxvm" #Volume manager to use when more than one volume manager is available
vmtype="svm" #Volume manager to use when more than one volume manager is available
#vol-restore="off" #Method of restoring a volume
#volmove-cutover-retry=3 #Number of retries during volume migration
#volmove-cutover-retry-sleep=3 #Number of seconds between retries during volume migration cutover phase
</pre>
Only now start snapdrived:
<pre>
# /usr/sbin/snapdrived start
</pre>
Connect SnapDrive to the filer:
<pre>
# getent hosts fas01 >> /etc/hosts
# /opt/NTAPsnapdrive/bin/snapdrive config set root fas01
# /opt/NTAPsnapdrive/bin/snapdrive snap list -filer fas01
</pre>
Check:
<pre>
# /opt/NTAPsnapdrive/bin/snapdrive config list
</pre>
==SnapManager for Oracle==
<pre>
# sh ./netapp.smo.sunos-sparc64-3.2.bin
# smogui
</pre>
Now on through the wizard...
c9b75cdfa0a7fe919963c5b58ab624280e5467d9
Category:SSH
14
225
2471
881
2021-11-25T23:18:26Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
[[Category:Security]]
0f462b6b503d2fc1d2fb730627d8ca1383c57c16
Category:Messor
14
6
2472
6
2021-11-25T23:22:16Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ameisen]]
8b4529e141e02312735639bddbd1b0df94a379a3
ZFS on Linux
0
222
2473
2261
2021-11-25T23:24:21Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as raw vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
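The ashift column computed above is just the base-2 logarithm of the physical sector size: 512-byte sectors give ashift=9, 4096-byte sectors give ashift=12. The same exponent can be computed standalone, e.g. for a 4 KiB sector disk:
<syntaxhighlight lang=bash>
# log2 of the physical sector size = ashift (here for 4096-byte sectors)
awk 'BEGIN { v=4096; for (i=0; v>1; i++) v /= 2; print i }'   # prints 12
</syntaxhighlight>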
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# comment out: GRUB_HIDDEN_TIMEOUT=0
# remove "quiet" and "splash" from GRUB_CMDLINE_LINUX_DEFAULT
# uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
!!! BE CAREFUL with the name after the @ !!!
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!!!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
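To illustrate the warning above: systemd substitutes the instance name (the part after the @) for every %i in the template unit, so the unit operates on exactly that dataset under rpool. A quick sketch of the expansion:
<syntaxhighlight lang=bash>
# %i in the template unit expands to the instance name after the '@'
instance="cryptswap"        # from zfs-cryptswap@cryptswap.service
echo "rpool/${instance}"    # this dataset is destroyed and recreated; prints rpool/cryptswap
</syntaxhighlight>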
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before boot. This is the filesystem which is read when your module is loaded.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
===Limit the cache without a reboot (non-permanent)===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
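The $[512*1024*1024] arithmetic simply expands to the byte value written into zfs_arc_max; the modern $(( )) form gives the same result:
<syntaxhighlight lang=bash>
# 512 MiB expressed in bytes
echo $((512*1024*1024))   # prints 536870912
</syntaxhighlight>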
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [[ ${property} == share.* ]]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
62126447fa674e1d31b4f66b987709027046552a
Category:Blattodea
14
267
2474
1559
2021-11-25T23:27:58Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Schaben]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Blattodea
| cockroach.speciesfile.org_TaxonNameID = 1172573
| LSID = urn:lsid:Blattodea.speciesfile.org:TaxonName:1
}}
2563576f516445e5fcbe63c73964956cfbb78438
Category:Brocade
14
140
2475
375
2021-11-25T23:45:01Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Pass
0
367
2476
2250
2021-11-25T23:54:04Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux|pass]]
=pass - The standard unix password manager=
==Tips & Tricks==
===SSH===
To pass the password to the ssh password prompt you need one more tool: sshpass.
Put only the password in your Customers/CustomerA/myuser@sshhost entry.
====Obvious way====
<syntaxhighlight lang=bash>
$ pass -c Customers/CustomerA/myuser@sshhost
$ ssh myuser@sshhost
Password:<paste the copied password>
myuser@sshhost:~$
</syntaxhighlight>
====Cooler way====
=====Create an alias=====
<syntaxhighlight lang=bash>
$ alias customerA-sshhost='sshpass -f <(pass Customers/CustomerA/sshuser@sshhost) ssh sshuser@sshhost'
</syntaxhighlight>
=====Use it=====
<syntaxhighlight lang=bash>
$ customerA-sshhost
sshuser@sshhost:~$
</syntaxhighlight>
===MySQL===
Put only the password in your Customers/CustomerB/mysqluser@mysqlhost:mysql entry.
====Obvious way====
<syntaxhighlight lang=bash>
$ pass -c Customers/CustomerB/mysqluser@mysqlhost:mysql
$ mysql -h mysqlhost -u mysqluser
Enter password: <paste the copied password>
...
MariaDB [(none)]>
</syntaxhighlight>
====Cooler way====
=====Create an alias=====
<syntaxhighlight lang=bash>
$ alias customerB-mysqlhost-mysqluser='mysql --user mysqluser --host mysqlhost --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</syntaxhighlight>
Or even cooler, with a separate history and defaults file per connection:
<syntaxhighlight lang=bash>
$ mkdir -p ~/Customers/CustomerB/.mysql
$ cat > ~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser << EOF
[client]
host=mysqlhost
user=mysqluser
EOF
$ alias customerB-mysqlhost-mysqluser='MYSQL_HISTFILE=~/Customers/CustomerB/.mysql/.mysql_history_mysqlhost mysql --defaults-file=~/Customers/CustomerB/.mysql/.my.cnf-mysqlhost-mysqluser --password=$(pass show Customers/CustomerB/mysqluser@mysqlhost:mysql)'
</syntaxhighlight>
=====Use it=====
<syntaxhighlight lang=bash>
$ customerB-mysqlhost-mysqluser
...
MariaDB [(none)]>
</syntaxhighlight>
==Links==
* [https://www.passwordstore.org/ Official site of pass]
* [https://sourceforge.net/projects/sshpass/ sshpass]
0c4850b1e13ccadc74e4773c37056cb5bfce460a
Systemd
0
233
2477
2210
2021-11-25T23:57:06Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, like most daemon names, it has to be written in lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It has many new features and extends the classic init with the ability to supervise processes after they have started, list sockets owned by processes started by systemd, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
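As a minimal illustration (hypothetical unit name, path and command, not from this system), a systemd service unit is a small ini-style file:
<syntaxhighlight lang=ini>
# /etc/systemd/system/example.service (hypothetical)
[Unit]
Description=Example long-running service

[Service]
ExecStart=/bin/sleep infinity
Restart=on-failure

[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Such a unit is enabled and started with <code>systemctl enable --now example.service</code>.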
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<syntaxhighlight lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</syntaxhighlight>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<syntaxhighlight lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</syntaxhighlight>
Ah, I deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<syntaxhighlight lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</syntaxhighlight>
==Display unit declaration==
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==Sockets==
<syntaxhighlight lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</syntaxhighlight>
==View dependencies==
What depends on ''zfs.target'':
<syntaxhighlight lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</syntaxhighlight>
And what do we need to reach the ''zfs.target''?
<syntaxhighlight lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</syntaxhighlight>
==Get the main PID of a service==
<syntaxhighlight lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</syntaxhighlight>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<syntaxhighlight lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</syntaxhighlight>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs that simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it currently has. This prohibits privilege escalation: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
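In a unit file this is a single line; a minimal sketch (the comment describes the effect):
<syntaxhighlight lang=ini>
[Service]
# Neither this process nor any of its children can ever gain new
# privileges, e.g. through setuid binaries such as su or sudo.
NoNewPrivileges=true
</syntaxhighlight>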
==Limiting access to a socket==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
Deny all but the monitoring server (172.17.128.193):
<syntaxhighlight lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</syntaxhighlight>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
First remove the old value, then set the new one.
<syntaxhighlight lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</syntaxhighlight>
=systemd-resolved the name resolve service=
==Status==
<syntaxhighlight lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</syntaxhighlight>
==Cache statistics==
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
==Flush the cache==
<syntaxhighlight lang=bash>
$ systemd-resolve --flush-caches
</syntaxhighlight>
Check with:
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
=systemd-timesyncd, an alternative to ntp=
ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</syntaxhighlight>
NTP is a space-separated list of NTP servers.
FallbackNTP is a list of servers used if none of the servers in the NTP list can be reached.
If you want to split the configuration into multiple files or generate it at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
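Such a drop-in could look like this (file name and server names are invented for illustration):
<syntaxhighlight lang=ini>
# /etc/systemd/timesyncd.conf.d/50-local-ntp.conf (hypothetical)
[Time]
NTP=ntp1.example.org ntp2.example.org
</syntaxhighlight>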
After you have set up the configuration, enable timesyncd via:
<syntaxhighlight lang=bash>
# timedatectl set-ntp true
</syntaxhighlight>
Control your success with:
<syntaxhighlight lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</syntaxhighlight>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<syntaxhighlight lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</syntaxhighlight>
Hmm... let us take a look at ntp:
<syntaxhighlight lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</syntaxhighlight>
Maybe we should uninstall or disable ntp first ;-).
<syntaxhighlight lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</syntaxhighlight>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
This means: to reach ''zfs.target'', ''zed.service'' is wanted (started if enabled), while ''zfs-mount.service'' and ''zfs-share.service'' are required.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleaned up. This is done with a separate mount namespace for the unit.
<syntaxhighlight lang=ini>
[Service]
...
PrivateTmp=true|false
...
</syntaxhighlight>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf='' with a space-separated list of units.
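A sketch of such sharing, with invented unit names; the joining unit names the unit whose namespace it wants to enter, and both have PrivateTmp enabled:
<syntaxhighlight lang=ini>
# Hypothetical worker.service sharing its /tmp with db.service
[Unit]
JoinsNamespaceOf=db.service

[Service]
PrivateTmp=true
</syntaxhighlight>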
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<syntaxhighlight lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</syntaxhighlight>
With this capability set we can use it as a normal user:
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</syntaxhighlight>
If we remove this capability it does not work:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</syntaxhighlight>
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</syntaxhighlight>
Of course it still works as root as root has all capabilities:
<syntaxhighlight lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</syntaxhighlight>
So we better set this capability again:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</syntaxhighlight>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</syntaxhighlight>
===Get the name for the needed unit file===
The name of a .mount unit file is derived from the mount destination path: slashes become dashes, and literal dashes must be escaped. To get the resulting name you can simply use systemd-escape.
<syntaxhighlight lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</syntaxhighlight>
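The escaping rule itself can be sketched in plain shell (an illustration only; systemd-escape handles more special characters than just '-' and '/'):
<syntaxhighlight lang=bash>
path="/var/chroot/run/systemd/journal/dev-log"
# Escape literal dashes first, then turn the path separators into dashes.
unit=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g')
printf '%s.mount\n' "$unit"   # var-chroot-run-systemd-journal-dev\x2dlog.mount
</syntaxhighlight>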
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double the backslash (\\) in front of x2d (the escape sequence for a dash, -) when typing the name in the shell.
<syntaxhighlight lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<syntaxhighlight lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
===Mount the socket===
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</syntaxhighlight>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<syntaxhighlight lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</syntaxhighlight>
Restart the journal daemon:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<syntaxhighlight>
...
source s_src {
    system();
    internal();
    unix-dgram("/run/systemd/journal/dev-log");
};
...
</syntaxhighlight>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</syntaxhighlight>
===Restart syslog-ng daemon===
<syntaxhighlight lang=bash>
# systemctl restart syslog-ng.service
</syntaxhighlight>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of, for example, <i>apache2.service</i>, you can place a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> that are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot id.
What is that? An id which is generated at each boot.
You can get the boot id with:
<syntaxhighlight lang=bash>
# journalctl --list-boots
</syntaxhighlight>
The second field of the last line is the current one, e.g.:
<syntaxhighlight lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</syntaxhighlight>
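By the way, the kernel exposes the same id under /proc; journalctl prints it without dashes, the kernel file with dashes, so a quick cross-check is:
<syntaxhighlight lang=bash>
# Print the current boot id in the same dash-less form journalctl uses.
tr -d '-' < /proc/sys/kernel/random/boot_id
</syntaxhighlight>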
When will that be? Try:
<syntaxhighlight lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</syntaxhighlight>
OK, but you probably want to run it once an hour? Then just reschedule the timer like this:
<syntaxhighlight lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</syntaxhighlight>
and change the interval like this
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== fwupd.service behind proxy ==
<syntaxhighlight lang=bash>
# systemctl edit fwupd-refresh.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Service]
Environment=http_proxy="http://user:passw0rd@proxy.intern.net:8080" https_proxy="http://user:passw0rd@proxy.intern.net:8080"
PassEnvironment=http_proxy https_proxy
</syntaxhighlight>
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<syntaxhighlight lang=ini>
# /etc/systemd/system/tomcat-ndr.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<syntaxhighlight>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</syntaxhighlight>
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</syntaxhighlight>
52e616390ece621eafc68029b2b6c4eb4d559045
Ubuntu networking
0
278
2478
2290
2021-11-26T00:06:08Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Ubuntu|Networking]]
[[Kategorie:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<syntaxhighlight lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</syntaxhighlight>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<syntaxhighlight lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</syntaxhighlight>
===Check settings===
<syntaxhighlight lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</syntaxhighlight>
==The ip command==
===Configure bond manually===
Specify your environment
<syntaxhighlight lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9/24
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</syntaxhighlight>
Create the bonding interface out of the two masters
<syntaxhighlight lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</syntaxhighlight>
If you want to add a VLAN to your interface
<syntaxhighlight lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</syntaxhighlight>
Bring your interface up and set your IP address
<syntaxhighlight lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</syntaxhighlight>
Set your default gateway and DNS
<syntaxhighlight lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</syntaxhighlight>
===ipa===
This is not only India Pale Ale! On Linux
<syntaxhighlight lang=bash>
# ip a
</syntaxhighlight>
shows you the configured addresses.
It is short for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<syntaxhighlight lang=bash>
# ip li sh up
</syntaxhighlight>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The former configuration in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
The file is named /etc/netplan/<whatever you want, I prefer the interface name>.yaml. The .yaml extension is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<syntaxhighlight lang=bash>
# netplan apply
</syntaxhighlight>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<syntaxhighlight lang=yaml>
network:
  ethernets:
    ens160:
      dhcp4: yes
  version: 2
</syntaxhighlight>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<syntaxhighlight lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    slave1:
      match:
        macaddress: "3c:a7:2a:22:af:70"
      dhcp4: no
    slave2:
      match:
        macaddress: "3c:a7:2a:22:af:71"
      dhcp4: no
  bonds:
    bond007:
      interfaces:
        - slave1
        - slave2
      parameters:
        mode: balance-rr
        mii-monitor-interval: 10
      dhcp4: no
      addresses:
        - 192.168.189.202/27
      gateway4: 192.168.189.193
      nameservers:
        search:
          - mcs.de
        addresses:
          - "192.168.3.60"
          - "192.168.3.61"
</syntaxhighlight>
4ac4044252e3e738e8ae4190683c733f9c68fcd7
2481
2478
2021-11-26T00:26:05Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ubuntu|Networking]]
[[Category:Linux|Networking]]
==Disable IPv6==
===Create /etc/sysctl.d/60-disable-ipv6.conf===
Create a file named <i>/etc/sysctl.d/60-disable-ipv6.conf</i> with this content:
<syntaxhighlight lang=bash>
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</syntaxhighlight>
===Activate /etc/sysctl.d/60-disable-ipv6.conf===
<syntaxhighlight lang=bash>
# sysctl -p /etc/sysctl.d/60-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
</syntaxhighlight>
===Check settings===
<syntaxhighlight lang=bash>
# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
</syntaxhighlight>
==The ip command==
===Configure bond manually===
Specify your environment
<syntaxhighlight lang=bash>
# mymaster1=eno5
# mymaster2=eno6
# myinterface=bond007
# myipaddr=172.16.78.9/24
# mygateway=172.16.78.1
# declare -a mynameservers=( 172.16.77.4 172.16.79.4 )
</syntaxhighlight>
Create the bonding interface out of the two masters
<syntaxhighlight lang=bash>
# ip link add ${myinterface} type bond
# ip link set ${myinterface} type bond miimon 100 mode active-backup
# ip link set ${mymaster1} down
# ip link set ${mymaster1} master ${myinterface}
# ip link set ${mymaster2} down
# ip link set ${mymaster2} master ${myinterface}
</syntaxhighlight>
If you want to add a VLAN to your interface
<syntaxhighlight lang=bash>
# myvlan=1234
# ip link add link ${myinterface} name ${myinterface}.${myvlan} type vlan id ${myvlan}
# myinterface=${myinterface}.${myvlan}
</syntaxhighlight>
Bring your interface up and set your IP address
<syntaxhighlight lang=bash>
# ip link set ${myinterface} up
# ip addr add ${myipaddr} dev ${myinterface}
</syntaxhighlight>
Set your default gateway and DNS
<syntaxhighlight lang=bash>
# ip route add default via ${mygateway}
# if (( ${#mynameservers[*]} > 1 )) ; then eval systemd-resolve --interface ${myinterface} --set-dns={$(IFS=,; printf '%s' "${mynameservers[*]}")} ; else eval systemd-resolve --interface ${myinterface} --set-dns=${mynameservers[0]} ; fi
</syntaxhighlight>
===ipa===
This is not only India Pale Ale! On Linux
<syntaxhighlight lang=bash>
# ip a
</syntaxhighlight>
shows you the configured addresses.
It is short for "ip address show".
===iplishup===
This just sounds like a word and helps you to keep it in mind.
<syntaxhighlight lang=bash>
# ip li sh up
</syntaxhighlight>
shows you all links (interfaces) that are up.
This is short for "ip link show up".
==New since Ubuntu 17.10==
===netplan===
The former configuration in /etc/network/interfaces{,.d} now lives in /etc/netplan, in YAML syntax.
The file is named /etc/netplan/<whatever you want, I prefer the interface name>.yaml. The .yaml extension is not optional!
====netplan <command>====
To apply changes to your files in /etc/netplan without reboot use:
<syntaxhighlight lang=bash>
# netplan apply
</syntaxhighlight>
Keep in mind: You might lose your connection depending on the changes made!
====DHCP====
<i>/etc/netplan/ens160.yaml</i>
<syntaxhighlight lang=yaml>
network:
  ethernets:
    ens160:
      dhcp4: yes
  version: 2
</syntaxhighlight>
====Bonding====
<i>/etc/netplan/bond007.yaml</i>
<syntaxhighlight lang=yaml>
network:
  version: 2
  renderer: networkd
  ethernets:
    slave1:
      match:
        macaddress: "3c:a7:2a:22:af:70"
      dhcp4: no
    slave2:
      match:
        macaddress: "3c:a7:2a:22:af:71"
      dhcp4: no
  bonds:
    bond007:
      interfaces:
        - slave1
        - slave2
      parameters:
        mode: balance-rr
        mii-monitor-interval: 10
      dhcp4: no
      addresses:
        - 192.168.189.202/27
      gateway4: 192.168.189.193
      nameservers:
        search:
          - mcs.de
        addresses:
          - "192.168.3.60"
          - "192.168.3.61"
</syntaxhighlight>
e6de8d92cde0df4a06cd7129219615f183449167
Exim cheatsheet
0
27
2479
2221
2021-11-26T00:10:26Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Exim]]
=Questions and Answers=
==View the header of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics for the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <mail address></pre>
===With lots of debugging===
<pre># exim -bv -d+all <mail address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I trigger delivery of ONE specific mail again?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
Even better than exigrep is exipick!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 and younger than 50 minutes and are not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <Parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always good: inspect the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<syntaxhighlight lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</syntaxhighlight>
Delete the entries:
For this, use the somewhat unpolished tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<syntaxhighlight lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</syntaxhighlight>
==Spam==
<syntaxhighlight lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</syntaxhighlight>
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</syntaxhighlight>
But logrotate with these files is a little bit tricky.
I found this to be a good way to rotate the logfiles:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
== touch the dummy rotate file ==
This file is needed to trigger the rotation, even though it is only a dummy.
<syntaxhighlight lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</syntaxhighlight>
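The find regex used in the lastaction script can be sanity-checked against sample filenames before relying on it (a sketch assuming GNU find with -regextype posix-awk):

```shell
#!/bin/sh
# Check which sample filenames the rotation regex actually matches.
dir=$(mktemp -d)
touch "$dir/mainlog-20240101" "$dir/rejectlog-20240101" "$dir/paniclog" \
      "$dir/mainlog" "$dir/rotate_this_-_do_not_delete"
# Same pattern as in the logrotate lastaction script:
find "$dir" -regextype posix-awk \
    -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' | sort
rm -r "$dir"
```

Only the datestamped main/reject logs and paniclog match; the dummy rotate file and an undated mainlog are left alone.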
c6490d420c66e19bdb6016b75bdb7e25ff1e07b8
Dpkg
0
244
2480
2421
2021-11-26T00:22:24Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux]]
==Missing key id NO_PUBKEY==
<syntaxhighlight lang=bash>
# apt-key adv --keyserver keyserver.ubuntu.com --recv <keyid>
</syntaxhighlight>
==Package sources which resolve to IPv6 addresses sometimes cause problems==
To force the use of the returned IPv4 addresses, do:
<syntaxhighlight lang=bash>
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
</syntaxhighlight>
==Packages from a specific source==
===Prerequisite: dctrl-tools===
<syntaxhighlight lang=bash>
sudo apt-get install dctrl-tools
</syntaxhighlight>
===Show packages===
For example, all PPA packages:
<syntaxhighlight lang=bash>
sudo grep-dctrl -sPackage . /var/lib/apt/lists/ppa*_Packages
</syntaxhighlight>
==From where is my package installed?==
<syntaxhighlight lang=bash>
sudo apt-cache policy <package>
</syntaxhighlight>
==Does my file match the checksum from the package?==
If you fear you have been hacked, verify your binaries!
===Prerequisite: debsums===
<syntaxhighlight lang=bash>
sudo apt-get install debsums
</syntaxhighlight>
===Verify packages===
<syntaxhighlight lang=bash>
sudo debsums <package name>
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo debsums unhide.rb
/usr/bin/unhide.rb OK
/usr/share/doc/unhide.rb/changelog.Debian.gz OK
/usr/share/doc/unhide.rb/copyright OK
/usr/share/lintian/overrides/unhide.rb OK
/usr/share/man/man8/unhide.rb.8.gz OK
</syntaxhighlight>
e4cbfd6694bb38bf1201fde44f092a363684468d
Solaris LiveUpgrade
0
218
2483
2395
2021-11-26T00:33:13Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|LiveUpgrade]]
=Upgrade Solaris release=
==Install LiveUpgrade patches==
[http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html This site] has a good list of patches needed:
<syntaxhighlight lang=bash>
SPARC:
119254-LR Install and Patch Utilities Patch
121430-LR Live Upgrade patch
121428-LR SUNWluzone required patches
138130-01 vold patch
140914-02 cpio patch
x86:
119255-LR Install and Patch Utilities Patch
121431-LR Live Upgrade patch
121429-LR SUNWluzone required patches
138884-01 SunOS 5.10_x86: GRUB patch
138131-01 vold patch
140915-02 cpio patch
</syntaxhighlight>
Higher patch revisions may be available...
==Mount the Solaris 10 DVD ISO-image==
<syntaxhighlight lang=bash>
# mkdir /tmp/os
# mount $(lofiadm -a /root/sol-10-u11-ga-x86-dvd.iso) /tmp/os
</syntaxhighlight>
==Create the new BootEnvironment==
<syntaxhighlight lang=bash>
# lucreate -n Solaris10u11
</syntaxhighlight>
==Upgrade the new BootEnvironment==
<syntaxhighlight lang=bash>
# echo "autoreg=disable" > /tmp/no-autoreg
# luupgrade -u -n Solaris10u11 -s /tmp/os -k /tmp/no-autoreg
</syntaxhighlight>
==Activate the new BootEnvironment==
<syntaxhighlight lang=bash>
# luactivate Solaris10u11
</syntaxhighlight>
=Install EIS patches=
==Mount the new EIS-ISO==
<syntaxhighlight lang=bash>
# mkdir /tmp/eis
# mount -F hsfs $(lofiadm -a /root/EIS/EIS-DVD-ONE-15JUL15.iso) /tmp/eis
</syntaxhighlight>
==Update LU patches==
<syntaxhighlight lang=bash>
# cd /tmp/eis/sun/patch/x86/LU/10
# unpack-patches -q -r
# cd
</syntaxhighlight>
==Create the new BootEnvironment==
<syntaxhighlight lang=bash>
# lucreate -n Solaris10-EIS-15JUL15
</syntaxhighlight>
==Mount the new BootEnvironment==
<syntaxhighlight lang=bash>
# mkdir /tmp/BE
# lumount Solaris10-EIS-15JUL15 /tmp/BE
</syntaxhighlight>
==Install EIS-Patches==
<syntaxhighlight lang=bash>
# cd /tmp/eis/sun
# patch-EIS -R /tmp/BE /var/tmp
Will apply patches from directories: x86/10 x86/cacao/2.1 x86/SWUP/10 SunVTS/7.0_x86 x86/LU/10
Patching from directory: patch/x86/10
Cleaning out /tmp/BE//var/tmp/10...
...
Now the Solaris 10_x86 Recommended Patches...
...
</syntaxhighlight>
==Problems: Installing this patch set to an alternate boot environment first requires the live boot environment to have patch utilities and other prerequisite patches==
<syntaxhighlight lang=bash>
Installing this patch set to an alternate boot environment first requires the
live boot environment to have patch utilities and other prerequisite patches
at the same (or higher) patch revisions as those delivered by this patch set.
The required prerequisite patches can be applied to the live boot environment
by invoking this script with the '--apply-prereq' option, ie.
./installpatchset --apply-prereq --s10patchset
</syntaxhighlight>
===Solution===
<syntaxhighlight lang=bash>
root@solaris10 # cd /mnt/var/tmp/10/10_x86_Recommended
root@solaris10 # ./installpatchset --apply-prereq --s10patchset
...
Installation of prerequisite patches complete.
...
</syntaxhighlight>
==Umount the BE==
<syntaxhighlight lang=bash>
# luumount Solaris10-EIS-15JUL15
</syntaxhighlight>
==Activate BE & Reboot==
<syntaxhighlight lang=bash>
# luactivate Solaris10-EIS-15JUL15
# init 6
</syntaxhighlight>
= Solaris 10 CPU with LiveUpgrade =
== Install LiveUpgrade (and some other necessary) Patches==
In the unzipped CPU do:
<syntaxhighlight lang=bash>
root@solaris10 # ./installpatchset --s10patchset --apply-prereq
</syntaxhighlight>
== Create LiveUpgrade environment ==
In this example we use the CPU_2017-07:
<syntaxhighlight lang=bash>
root@solaris10 # lucreate -n Solaris_10-CPU_2017-07
...
Population of boot environment <Solaris_10-CPU_2017-07> successful.
Creation of boot environment <Solaris_10-CPU_2017-07> successful.
</syntaxhighlight>
== Apply the patchset to the LiveUpgrade environment ==
<syntaxhighlight lang=bash>
root@solaris10 # ./installpatchset --s10patchset -B Solaris_10-CPU_2017-07
</syntaxhighlight>
== Activate the new patched LiveUpgrade environment ==
<syntaxhighlight lang=bash>
root@solaris10 # luactivate Solaris_10-CPU_2017-07
</syntaxhighlight>
Now you can reboot into it whenever you want, but it should be soon, because anything that changes in the meantime (logs and such) will only exist in the old boot environment.
d25b459d528bf17fe5b098c8dbddcd3e0a121f08
Category:NetApp
14
76
2484
149
2021-11-26T00:40:26Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
PowerDNS
0
287
2485
2333
2021-11-26T00:42:56Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are living on Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell the journald of systemd to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i> set it from
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart the journald
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to take the dev-log-socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</syntaxhighlight>
or
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
39cbfde10f8665168e32accc8b2a97304629d5f6
Template:Ameisengattung
10
45
2486
81
2021-11-26T00:46:14Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px;margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| '''''{{{Gattung}}}''''' {{#if:{{{DeName|}}}| <br>({{{DeName|}}}) }}
|-
| style=" border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">[[Bild:{{{Bild}}}|frameless|250x300px|{{{Bildbeschreibung}}}]]
{{{Bildbeschreibung}}}</div>
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
|-
| Unterfamilie:
|[[{{{Unterfamilie|}}}]]
|-
| Gattung:
|''[[{{{Gattung|}}}]]''
|-
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{Gattung}}}''
{{#if:{{{Autor|}}}| {{{Autor|}}} | }}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Category:Ameisengattung]][[Category:{{{Unterfamilie}}}]][[Category:{{{Gattung}}}|!]]}}</includeonly>
<noinclude>
28c878f0c5815bafdc232e6b915ef185313660c9
RadSecProxy
0
345
2487
2444
2021-11-26T00:52:22Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and the source from git on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is not needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users it is the TeleSec root certificate
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
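The <hash>.0 naming is OpenSSL's c_rehash convention: the filename is the certificate's subject hash plus an index. A sketch (the source file and target directory are example paths) that installs any CA file under its hash:

```shell
#!/bin/sh
# Install a CA certificate under its OpenSSL subject-hash name (c_rehash style).
# /tmp/telesec.pem and /etc/radsec/cert/ca are example paths; adjust to your setup.
ca=/tmp/telesec.pem
cadir=/etc/radsec/cert/ca
hash=$(openssl x509 -noout -hash -in "$ca")
mv "$ca" "$cadir/${hash}.0"
```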
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top level radius servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root,
but you need write access to the log, or use syslog.
The config, certificate, and key are not readable via the user's primary group (nogroup) but via the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this to /lib/systemd/system/radsecproxy.service and do:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server if the radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<syntaxhighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</syntaxhighlight>
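The notAfter string can be turned into days remaining with GNU date (a sketch; assumes GNU date's -d option, the timestamp is the example value from above):

```shell
#!/bin/sh
# Days until a given notAfter timestamp (GNU date assumed).
notafter="Oct  9 12:13:17 2020 GMT"
end=$(date -d "$notafter" +%s)
now=$(date +%s)
echo $(( (end - now) / 86400 ))
```

A negative number means the certificate has already expired.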
4e8ca0fd0a8f822cf88da0cd621fa8d4ee0efda3
Oracle Clients
0
342
2488
2320
2021-11-26T00:53:18Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<syntaxhighlight lang=bash>
$ sudo apt install alien libaio1
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep .so )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
$ dpkg -L $(dpkg -l oracle-instantclient*-basiclite | awk '$1=="ii"{print $2;}') | \
awk '
/client64$/{
oracle_home=$1;
printf "ORACLE_HOME=%s\nPATH=${PATH}:${ORACLE_HOME}/bin\nexport ORACLE_HOME PATH\n",oracle_home;
}' | \
sudo tee /etc/profile.d/oracle.sh
</syntaxhighlight>
531fde59282c0f57a24ffed4ffe91c0f9667cfc0
Category:VirtualBox
14
223
2489
1279
2021-11-26T01:06:07Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Virtualization]]
05c04b5d9c8b1a23bb915ae34cb9a88346d5320a
Category:Solaris SVM
14
26
2490
46
2021-11-26T01:13:57Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris]]
b2957bda5fdd0cfbd2a3c12d4f811f750d2f9508
Category:Hardware
14
209
2491
716
2021-11-26T01:16:55Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Perl Tipps und Tricks
0
178
2492
2264
2021-11-26T01:18:00Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Perl|Tipps und Tricks]]
==Negative match in RegEx (?:(?!PATTERN).)*==
Using perl as a special grep :-):
<syntaxhighlight lang=bash>
perl -ne 'if (/(<a href=[^>]+action=login[^>]+>(?:(?!<\/a>).)*<\/a>)/){ print $1."\n"; }' index.html
</syntaxhighlight>
This one matches a complete <pre><a href=...action=login...>(not </a>)</a></pre>.
Or more complex:
<syntaxhighlight lang=bash>
perl -ne 'if (/(<a href=[^h]*(http[s]{0,1}:\/\/([^\/"]+)[^> "]+)[^> ]*>(?:(?!<\/a>).)*<\/a>)/){ print $3."|".$2."|".$1."\n"; }' index.html
</syntaxhighlight>
Prints out:
<pre>
<server>|<url>|<complete href>
</pre>
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<syntaxhighlight lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</syntaxhighlight>
== Config ==
Override compile time flags on the commandline like this:
<syntaxhighlight lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</syntaxhighlight>
I used it to run sa-compile on Solaris:
<syntaxhighlight lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</syntaxhighlight>
7be1a25da939f213368a68825ea2d14eeb54519b
2499
2492
2021-11-26T01:39:36Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Perl|Tipps und Tricks]]
==Negative match in RegEx (?:(?!PATTERN).)*==
Using perl as a special grep :-):
<syntaxhighlight lang=bash>
perl -ne 'if (/(<a href=[^>]+action=login[^>]+>(?:(?!<\/a>).)*<\/a>)/){ print $1."\n"; }' index.html
</syntaxhighlight>
This one matches a complete <pre><a href=...action=login...>(not </a>)</a></pre>.
Or more complex:
<syntaxhighlight lang=bash>
perl -ne 'if (/(<a href=[^h]*(http[s]{0,1}:\/\/([^\/"]+)[^> "]+)[^> ]*>(?:(?!<\/a>).)*<\/a>)/){ print $3."|".$2."|".$1."\n"; }' index.html
</syntaxhighlight>
Prints out:
<pre>
<server>|<url>|<complete href>
</pre>
==Unread while reading from filehandle==
Dov Grobgeld made my day!
<syntaxhighlight lang=perl>
# Found at a comment of Dov Grobgeld at https://groups.google.com/d/msg/comp.lang.perl/7fPyGpWpP8M/hc7xTMvAoW0J
while($_ = shift(@linestack) || <IN>) {
:
push(@linestack, $whatever); # unread
}
</syntaxhighlight>
== Config ==
Override compile time flags on the commandline like this:
<syntaxhighlight lang=bash>
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC'
</syntaxhighlight>
I used it to run sa-compile on Solaris:
<syntaxhighlight lang=bash>
#!/bin/bash
exec >> /var/log/update-spamd-rules.log 2>&1
#LD_LIBRARY_PATH=/usr/sfw/lib
PATH=$PATH:/usr/local/bin:/opt/re2c/bin:/usr/sfw/bin:/usr/ccs/bin:/opt/csw/bin
PERL_VER=$(/usr/perl5/bin/perl -e 'printf "%.3f",$];')
SA_VER=$(/opt/spamassassin/bin/spamassassin -V | /usr/bin/nawk '
/SpamAssassin version/ {
split($NF,version,/\./);
printf "%d.%03d%03d",version[1],version[2],version[3];
}')
export LD_LIBRARY_PATH PATH PERL_VER SA_VER
/usr/perl5/bin/perlgcc -T /opt/spamassassin/bin/sa-update --updatedir=/var/opt/spamassassin/$SA_VER -D
PERL_MM_OPT='optimize=-O2 cc=gcc ld=gcc cccdlflags=-DPIC' /opt/spamassassin/bin/sa-compile --updatedir=/var/opt/spamassassin/compiled/${PERL_VER}/${SA_VER} -D
/usr/bin/kill -HUP `cat /tmp/spamd-exim-acl.pid`
/usr/bin/kill -HUP `cat /tmp/spamd-ip.pid`
</syntaxhighlight>
543e0bb0c5cacc76d4d99149310dedb975e65e75
Template:Taxobox/Rang
10
49
2493
87
2021-11-26T01:22:55Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>{{#switch: {{lc:{{{Rang}}}}}
||ohne=
|ohne rang= ohne Rang
|klassifikation= {{#if:{{{Genitiv|}}}|der }}[[Systematik (Biologie)|Klassifikation]]
|domäne= {{#if:{{{Genitiv|}}}|der }}[[Domäne (Biologie)|Domäne]]
|reich|regnum = {{#if:{{{Genitiv|}}}|des }}[[Reich (Biologie)|Reich]]{{#ifeq:{{{Plural|}}}|ja|e}}{{#if:{{{Genitiv|}}}|s}}
|unterreich|subregnum = {{#if:{{{Genitiv|}}}|des }}[[Reich (Biologie)|Unterreich]]{{#ifeq:{{{Plural|}}}|ja|e}}{{#if:{{{Genitiv|}}}|s}}
|überabteilung|superdivisio = {{#if:{{{Genitiv|}}}|der }}[[Abteilung (Biologie)|Überabteilung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|abteilung|divisio = {{#if:{{{Genitiv|}}}|der }}[[Abteilung (Biologie)|Abteilung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterabteilung|subdivisio = {{#if:{{{Genitiv|}}}|der }}[[Abteilung (Biologie)|Unterabteilung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|überstamm|superphylum = {{#if:{{{Genitiv|}}}|des }}[[Stamm (Systematik)|{{#ifeq:{{{Plural|}}}|ja|Überstämme|Überstamm}}]]{{#if:{{{Genitiv|}}}|s}}
|stamm|phylum = {{#if:{{{Genitiv|}}}|des }}[[Stamm (Systematik)|{{#ifeq:{{{Plural|}}}|ja|Stämme|Stamm}}]]{{#if:{{{Genitiv|}}}|s}}
|unterstamm|subphylum = {{#if:{{{Genitiv|}}}|des }}[[Stamm (Systematik)|{{#ifeq:{{{Plural|}}}|ja|Unterstämme|Unterstamm}}]]{{#if:{{{Genitiv|}}}|s}}
|überklasse|superclassis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Überklasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|reihe|seria = {{#if:{{{Genitiv|}}}|der }}[[Reihe (Biologie)|Reihe]]{{#ifeq:{{{Plural|}}}|ja|n}}
|klasse|classis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Klasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterklasse|subclassis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Unterklasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|teilklasse|infraclassis = {{#if:{{{Genitiv|}}}|der }}[[Klasse (Biologie)|Teilklasse]]{{#ifeq:{{{Plural|}}}|ja|n}}
|überkohorte|supercohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Überkohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|kohorte|cohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Kohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterkohorte|subcohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Unterkohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|teilkohorte|infracohors = {{#if:{{{Genitiv|}}}|der }}[[Kohorte (Biologie)|Teilkohorte]]{{#ifeq:{{{Plural|}}}|ja|n}}
|überordnung|superordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Überordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|ordnung|ordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Ordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterordnung|subordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Unterordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|teilordnung|infraordo = {{#if:{{{Genitiv|}}}|der }}[[Ordnung (Biologie)|Teilordnung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|überfamilie|superfamilia = {{#if:{{{Genitiv|}}}|der }}[[Familie (Biologie)|Überfamilie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|familie|familia = {{#if:{{{Genitiv|}}}|der }}[[Familie (Biologie)|Familie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterfamilie|subfamilia = {{#if:{{{Genitiv|}}}|der }}[[Familie (Biologie)|Unterfamilie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|tribus = {{#if:{{{Genitiv|}}}|der }}[[Tribus (Biologie)|Tribus]]{{#ifeq:{{{Plural|}}}|ja|}}
|untertribus|subtribus = {{#if:{{{Genitiv|}}}|der }}[[Tribus (Biologie)|Untertribus]]{{#ifeq:{{{Plural|}}}|ja|}}
|gattung|genus = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Gattung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|untergattung|subgenus = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Untergattung]]{{#ifeq:{{{Plural|}}}|ja|en}}
|sektion|sectio = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Sektion]]{{#ifeq:{{{Plural|}}}|ja|en}}
|untersektion|subsectio = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Untersektion]]{{#ifeq:{{{Plural|}}}|ja|en}}
|serie|series = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Serie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|unterserie|subseries = {{#if:{{{Genitiv|}}}|der }}[[Gattung (Biologie)|Unterserie]]{{#ifeq:{{{Plural|}}}|ja|n}}
|stirps = {{#if:{{{Genitiv|}}}|der }}[[Stirps|{{#ifeq:{{{Plural|}}}|ja|Stirpes|Stirps}}]]
|artenkreis|superspecies|superspezies = {{#if:{{{Genitiv|}}}|der }}[[Superspezies]]{{#ifeq:{{{Plural|}}}|ja|}}
|art|species = {{#if:{{{Genitiv|}}}|der }}[[Art (Biologie)|Art]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterart|subspecies = {{#if:{{{Genitiv|}}}|der }}[[Unterart]]{{#ifeq:{{{Plural|}}}|ja|en}}
|varietät|varietas = {{#if:{{{Genitiv|}}}|der }}[[Varietät (Biologie)|Varietät]]{{#ifeq:{{{Plural|}}}|ja|en}}
|untervarietät|subvarietas = {{#if:{{{Genitiv|}}}|der }}[[Varietät (Biologie)|Untervarietät]]{{#ifeq:{{{Plural|}}}|ja|en}}
|form|forma = {{#if:{{{Genitiv|}}}|der }}[[Form (Biologie)|Form]]{{#ifeq:{{{Plural|}}}|ja|en}}
|unterform|subforma = {{#if:{{{Genitiv|}}}|der }}[[Form (Biologie)|Unterform]]{{#ifeq:{{{Plural|}}}|ja|en}}
| #default = <div class="error">[[Vorlage:Taxobox/Rang/Doku|Warnung: Unbekannter Rang]]</div>
}}</includeonly><noinclude>Diese Vorlage wird innerhalb der [[Vorlage:Taxobox]] verwendet, Dokumentation siehe [[Vorlage:Taxobox/Doku/Tech]].
[[Category:Vorlage:Untervorlage|Taxobox/Rang]]
</noinclude>
1466fa89edded863b4e86a421431346c4116eac4
Category:Temnothorax
14
64
2494
112
2021-11-26T01:27:31Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ameisen]]
8b4529e141e02312735639bddbd1b0df94a379a3
NetApp Commands
0
201
2495
2426
2021-11-26T01:29:32Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category: NetApp]]
==Alignment==
CDOT 8.3:
<syntaxhighlight lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment -path /vol/kerberos_vol_luns_backup/kerberos_lun_*
vserver path lun alignment
---------- --------------------------------- ------------ ---------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned
4 entries were displayed.
netapp-svm1::> set -priv admin
</syntaxhighlight>
To see in which buckets the reads and writes occur:
<syntaxhighlight lang=bash>
netapp-svm1::> set -priv diag
netapp-svm1::*> lun alignment show -vserver svm_kerberos_backup -fields vserver,lun,alignment,read-histogram,write-histogram -path /vol/kerberos_vol_luns_backup/kerberos_lun_**
vserver path lun alignment write-histogram read-histogram
---------- --------------------------------- ------------ --------- ---------------- ----------------
svm_backup /vol/vol_luns_backup/lun_1.bcklun lun_1.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_2.bcklun lun_2.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_3.bcklun lun_3.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
svm_backup /vol/vol_luns_backup/lun_4.bcklun lun_4.bcklun aligned 99,0,0,0,0,0,0,0 99,0,0,0,0,0,0,0
4 entries were displayed.
netapp-svm1::> set -priv admin
</syntaxhighlight>
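Each histogram column is, as far as I can tell from the diag output, the percentage of I/O starting at successive 512-byte offsets within a 4k block, so a healthy LUN keeps (nearly) everything in the first bucket. A quick filter for suspicious histograms (pure text processing; the sample values are made up):

<syntaxhighlight lang=bash>
# Flag any histogram whose first bucket holds less than 90% of the I/O
printf '%s\n' "99,0,0,0,0,0,0,0" "10,80,0,0,0,0,0,0" |
awk -F',' '$1 < 90 { print "suspicious histogram: " $0 }'
# prints: suspicious histogram: 10,80,0,0,0,0,0,0
</syntaxhighlight>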
==Performance==
<syntaxhighlight lang=bash>
filer> priv set -q diag ; statit -b ; sysstat -x -s -c 20 3 ; statit -e ; priv set
filer> priv set -q diag ; stats show lun:*:avg_latency ; priv set
</syntaxhighlight>
===Flashpool===
<syntaxhighlight lang=bash>
filer> priv set -q diag ; stats show -p hybrid_aggr ; priv set
</syntaxhighlight>
== User ==
=== Create snapshot user ===
<syntaxhighlight lang=bash>
security login role create -vserver svm1 vol-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm1 vol-snapshot-only -cmddirname "security login publickey" -access all
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm1 vol-snapshot-only -cmddirname "volume snapshot create" -access all
security login create -vserver svm1 -role vol-snapshot-only -user-or-group-name snapshot-user -application ssh -authmethod publickey -comment "Snapshot User"
security login publickey create -vserver svm1 -username snapshot-user -publickey "ssh-rsa AAAAB3Nz...geX33k5 snapshot-user"
</syntaxhighlight>
=== Create snapshot user for http api ===
==== Create the role ====
<syntaxhighlight lang=bash>
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver svm42 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
</syntaxhighlight>
==== Check role parameter ====
<syntaxhighlight lang=bash>
set -showseparator ";" -showallfields true
security login role show -vserver svm42 -role ansible-snapshot-only
vserver;role;profilename;cmddirname;access;query;
Vserver;Role Name;Role Name;Command / Directory;Access Level;Query;
svm42;ansible-snapshot-only;ansible-snapshot-only;DEFAULT;none;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot";readonly;"";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot create";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot delete";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot modify";all;"-snapshot ansible_*";
svm42;ansible-snapshot-only;ansible-snapshot-only;"volume snapshot show";all;"-snapshot ansible_*";
</syntaxhighlight>
==== Create user with role ====
<syntaxhighlight lang=bash>
security login create -vserver svm42 -application ontapi -authentication-method password -role ansible-snapshot-only -user-or-group-name ansible
</syntaxhighlight>
==Network interfaces==
<syntaxhighlight lang=bash>
ncl01::> network interface show -vserver ncl1
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
ncl01
cluster_mgmt up/up 10.10.20.41/24 ncl01-01 a0a true
ncl01-01-ic1 up/up 10.10.20.44/24 ncl01-01 a0a true
ncl01-01_mgmt1 up/up 10.10.20.42/24 ncl01-01 a0a true
ncl01-02-ic1 up/up 10.10.20.45/24 ncl01-02 a0a true
ncl01-02_mgmt1 up/up 10.10.20.43/24 ncl01-02 a0a true
5 entries were displayed.
ncl01::> network port show -link down
Node: ncl01-01
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
Node: ncl01-02
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0j Default - down 1500 auto/1000 -
e0l Default - down 1500 auto/1000 -
4 entries were displayed.
ncl01::> network port show -health-status degraded
There are no entries matching your query.
ncl01::> network port ifgrp show
Port Distribution Active
Node IfGrp Function MAC Address Ports Ports
-------- ---------- ------------ ----------------- ------- -------------------
ncl01-01
a0a ip 02:a0:98:6d:06:b7 full e0i, e0k
a0b ip 02:a0:98:6d:06:b8 full e3a, e3b, e7a, e7b
ncl01-02
a0a ip 02:a0:98:6d:07:1f full e0i, e0k
a0b ip 02:a0:98:6d:07:20 full e3a, e3b, e7a, e7b
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
ncl01::> network port show -fields speed-oper -port e0j,e0l
node port speed-oper
------------- ---- ----------
ncl01-01 e0j 1000
ncl01-01 e0l 1000
ncl01-02 e0j 1000
ncl01-02 e0l 1000
4 entries were displayed.
ncl01::> network port ifgrp show -fields down-ports
node ifgrp down-ports
------------- ----- ----------
ncl01-01 a0a -
ncl01-01 a0b -
ncl01-02 a0a -
ncl01-02 a0b -
4 entries were displayed.
</syntaxhighlight>
==Links==
* [http://www.cosonok.com/2014/02/brief-notes-on-advanced-troubleshooting.html Brief Notes on Advanced Troubleshooting in CDOT]
e1918ffc457b13357566b33b8fa73a81e7f2b6d1
MariaDB on ZFS
0
294
2496
2429
2021-11-26T01:33:21Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category: MySQL|ZFS]]
[[Category: MariaDB|ZFS]]
==ZFS parameters==
<syntaxhighlight lang=bash>
zfs set atime=off MYSQL-DATA
zfs set compression=lz4 MYSQL-DATA
zfs set atime=off MYSQL-LOG
zfs set compression=lz4 MYSQL-LOG
zfs set recordsize=8k MYSQL-DATA/data
zfs set recordsize=16k MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-DATA/InnoDB
zfs set primarycache=metadata MYSQL-LOG/ib_log
</syntaxhighlight>
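The `zfs set` calls above follow a simple pattern; a small loop can generate the shared properties (dataset names as in the listing; the sketch only echoes the commands, so it is safe to dry-run before piping into a shell):

<syntaxhighlight lang=bash>
# Print the common zfs set commands instead of running them (dry run)
for ds in MYSQL-DATA MYSQL-LOG; do
    echo "zfs set atime=off $ds"
    echo "zfs set compression=lz4 $ds"
done
echo "zfs set recordsize=8k MYSQL-DATA/data"
echo "zfs set recordsize=16k MYSQL-DATA/InnoDB"
</syntaxhighlight>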
<syntaxhighlight lang=bash>
# zfs list -o recordsize,primarycache,compression,compressratio,atime,name -r MYSQL-DATA -r MYSQL-LOG
RECSIZE PRIMARYCACHE COMPRESS RATIO ATIME NAME
128K all lz4 1.06x off MYSQL-DATA
16K metadata lz4 2.81x off MYSQL-DATA/InnoDB
8K all lz4 1.05x off MYSQL-DATA/data
128K all lz4 2.15x off MYSQL-LOG
128K all lz4 1.00x off MYSQL-LOG/binlog
128K metadata lz4 2.17x off MYSQL-LOG/ib_log
</syntaxhighlight>
===If you have innodb_file_per_table=on===
<syntaxhighlight lang=bash>
# mysql -e 'show variables like "innodb_file_per_table";'
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | ON |
+-----------------------+-------+
</syntaxhighlight>
* If you have only InnoDB tables, or the only productive ones are InnoDB, consider setting the recordsize of MYSQL-DATA/data to 16k as well, because with this option all InnoDB data files (*.ibd) will be written there.
* Consider setting the initial innodb_data_file_path to a smaller value such as ibdata1:100M:autoextend.
==Database parameters for ZFS==
<syntaxhighlight lang=mysql>
datadir = /MYSQL-DATA/data/mysql
innodb_data_home_dir = /MYSQL-DATA/InnoDB
innodb_data_file_path = ibdata1:2000M:autoextend
innodb_log_group_home_dir = /MYSQL-LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = off
skip-innodb_doublewrite
</syntaxhighlight>
<syntaxhighlight lang=bash>
# /usr/sbin/mysqld --print-defaults
/usr/sbin/mysqld would have been started with the following arguments:
--server_id=42
--user=mysql
--pid-file=/var/run/mysqld/mysqld.pid
--socket=/var/run/mysqld/mysqld.sock
--port=3306
--basedir=/usr
--datadir=/MYSQL-DATA/data/mysql
--innodb_data_home_dir=/MYSQL-DATA/InnoDB
--innodb_data_file_path=ibdata1:100M:autoextend
--innodb_log_group_home_dir=/MYSQL-LOG/ib_log
--innodb_flush_method=O_DIRECT
--innodb_flush_log_at_trx_commit=2
--skip-innodb_doublewrite
--tmpdir=/tmp
</syntaxhighlight>
On Linux, do not forget to add the new directories to AppArmor!
6b83d0418146c67acf79d0f9aff4d5703ad5a86e
Solaris pkg
0
378
2497
2262
2021-11-26T01:37:22Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category: Solaris11|pkg]]
== Troubleshooting ==
=== Error: pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a Service Request about this issue including the information above and this message.===
Full output example:
<syntaxhighlight lang=bash>
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Traceback (most recent call last):
File "/usr/bin/pkg", line 5668, in handle_errors
__ret = func(*args, **kwargs)
File "/usr/bin/pkg", line 5654, in main_func
pargs=pargs, **opts)
File "/usr/bin/pkg", line 2267, in update
display_plan_cb=display_plan_cb, logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1556, in _update
logger=logger)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1395, in __api_op
logger=logger, **kwargs)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1252, in __api_plan
display_plan_cb=display_plan_cb)
File "/usr/lib/python3.7/vendor-packages/pkg/client/client_api.py", line 1224, in __api_plan
for pd in api_plan_func(**kwargs):
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1516, in __plan_op
log_op_end_all=True)
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1144, in __plan_common_exception
six.reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3.7/vendor-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 1429, in __plan_op
self.__refresh_publishers()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 620, in __refresh_publishers
self.__cert_verify()
File "/usr/lib/python3.7/vendor-packages/pkg/client/api.py", line 603, in __cert_verify
self._img.check_cert_validity()
File "/usr/lib/python3.7/vendor-packages/pkg/client/image.py", line 1338, in check_cert_validity
uri=uri)
File "/usr/lib/python3.7/vendor-packages/pkg/misc.py", line 1242, in validate_ssl_cert
if cert.has_expired():
File "/usr/lib/python3.7/vendor-packages/OpenSSL/crypto.py", line 1360, in has_expired
not_after = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
File "/usr/lib/python3.7/_strptime.py", line 277, in <module>
_TimeRE_cache = TimeRE()
File "/usr/lib/python3.7/_strptime.py", line 191, in __init__
self.locale_time = LocaleTime()
File "/usr/lib/python3.7/_strptime.py", line 71, in __init__
self.__calc_month()
File "/usr/lib/python3.7/_strptime.py", line 99, in __calc_month
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/_strptime.py", line 99, in <listcomp>
a_month = [calendar.month_abbr[i].lower() for i in range(13)]
File "/usr/lib/python3.7/calendar.py", line 63, in __getitem__
return funcs(self.format)
ValueError: character U+30000043 is not in range [U+0000; U+10ffff]
pkg: This is an internal error in pkg(7) version b'3beb69dcf209'. Please log a
Service Request about this issue including the information above and this
message.
</syntaxhighlight>
Workaround:
<syntaxhighlight lang=bash>
# unset $(env | awk -F'=' '$1 ~ /^LC_/{print $1;}')
# pkg update --accept --require-new-be --be-name solaris_11.4.27.1.82
Creating Plan (Package planning: 766/1256): \
</syntaxhighlight>
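The `unset` one-liner works by extracting every LC_* variable name from the environment. The awk filter itself can be checked with canned input (the variable values here are just examples):

<syntaxhighlight lang=bash>
# The same field-1 filter as in the workaround, fed with fake env lines
printf 'LC_ALL=C\nPATH=/usr/bin\nLC_TIME=de_DE.UTF-8\n' |
awk -F'=' '$1 ~ /^LC_/{print $1;}'
# prints:
# LC_ALL
# LC_TIME
</syntaxhighlight>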
02a9c4e147b285c120c55dc6896581aee40043f0
Category:Putty
14
226
2498
863
2021-11-26T01:38:23Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Amorphophallus henryi
0
82
2500
166
2021-11-26T01:39:53Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Amorphophallus henryi
| Taxon_WissName = Amorphophallus henryi
| Taxon_Rang = Art
| Taxon_Autor = N.E. Br. (Taiwan)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name =
| Taxon5_WissName = Araceae
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Description ==
[[Category:Amorphophallus]]
2d6f8d6685111c166d096743e9a084e60b419c44
ZFS cheatsheet
0
29
2501
2299
2021-11-26T01:42:10Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting snapshots that cannot be deleted ==
Here, after an aborted ZFS send/recv:
<syntaxhighlight lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</syntaxhighlight>
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS often comes from its very large cache appetite, which can be limited.
First, check the current state:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</syntaxhighlight>
Or ZFS only:
<syntaxhighlight lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</syntaxhighlight>
Print all ARC parameters:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</syntaxhighlight>
You can also print all parameters that are set for ZFS with:
<syntaxhighlight lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</syntaxhighlight>
Kernel parameters can also be set online with:
<syntaxhighlight lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</syntaxhighlight>
This sets zfs_arc_max to 4 GB = 0x100000000.
== Limiting the ARC Cache ==
Simply add to /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<syntaxhighlight lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</syntaxhighlight>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
'''Warning:''' NEVER do this on a production system!
Never use ''mdb -kw'' to set these values there!
On a '''test system''', however, you can look up the position in the kernel with
<syntaxhighlight lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</syntaxhighlight>
Calculate for example 8GB:
<syntaxhighlight lang=bash>
# printf "0x%x\n" $[ 8 * 1024 ** 3 ]
0x200000000
</syntaxhighlight>
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<syntaxhighlight lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</syntaxhighlight>
== Better display of ZFS space usage ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not yet available as a shortcut, this usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migration UFS root -> ZFS root via Live Upgrade ==
First, create the ZFS root pool:
<pre>
# zpool create rpool /dev/dsk/<zfs-disk>
</pre>
To avoid problems, keep the pool name rpool.
Create the boot environment (BE) with lucreate:
<pre>
# lucreate -c ufsBE -n zfsBE -p rpool
</pre>
This copies the files into the ZFS environment.
Check that the bootfs property was set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any leftover rootdev entries in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the ZFS boot block on the ZFS disk:
<pre>
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
</pre>
Activate the new BE:
<pre>
# luactivate zfsBE
</pre>
==cannot destroy 'snapshot': dataset is busy==
<syntaxhighlight lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</syntaxhighlight>
==Fragmentation==
<syntaxhighlight lang=bash>
# zdb -mm <pool> | nawk '/fragmentation/{count++;frag+=$NF}END{printf "Overall fragmentation %.2d\n",(frag/count);}'
</syntaxhighlight>
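The nawk reduction above simply averages the per-metaslab fragmentation values. The same program works with GNU awk; here it runs over a canned two-line zdb excerpt (values invented) to show the averaging:

<syntaxhighlight lang=bash>
# Average the fragmentation column of a canned zdb -mm excerpt
printf '%s\n' \
    "  segments 1024 maxsize 5M fragmentation 12" \
    "  segments 2048 maxsize 9M fragmentation 18" |
awk '/fragmentation/{count++; frag+=$NF} END{printf "Overall fragmentation %.2d\n", (frag/count);}'
# prints: Overall fragmentation 15
</syntaxhighlight>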
32e4d64d414f1ef790f5d84c8917580dc7efc3c9
2504
2501
2021-11-26T01:50:57Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* Important ZFS patches: 127729-07 (x86) / 127728-06 (SPARC)
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting snapshots that cannot be deleted ==
Here, after an aborted ZFS send/recv:
<syntaxhighlight lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME TAG TIMESTAMP
MYSQL-LOG@copy_20130403 .send-22887-0 Wed Apr 3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</syntaxhighlight>
== ZFS Tuning ==
Perceived sluggishness on systems with ZFS often comes from its very large cache appetite, which can be limited.
First, check the current state:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload] 13508608B 309323764 0
Total [kmem_msb] 24010752B 1509706 0
Total [kmem_va] 660340736B 140448 0
Total [kmem_default] 690409472B 1416078794 0
Total [kmem_io_64G] 34619392B 8456 0
Total [kmem_io_4G] 16384B 92 0
Total [kmem_io_2G] 24576B 62 0
Total [bp_map] 1048576B 234488 0
Total [umem_np] 786432B 976 0
Total [id32] 4096B 2620 0
Total [zfs_file_data_buf] 1471275008B 1326646 0
Total [segkp] 589824B 192886 0
Total [ip_minor_arena_sa] 64B 13332 0
Total [ip_minor_arena_la] 192B 45183 0
Total [spdsock] 64B 1 0
Total [namefs_inodes] 64B 24 0
lollypop@wirefall:~# echo "::memstat" | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 255013 996 24%
ZFS File Data 359196 1403 34%
Anon 346538 1353 33%
Exec and libs 33948 132 3%
Page cache 4836 18 0%
Free (cachelist) 22086 86 2%
Free (freelist) 23420 91 2%
Total 1045037 4082
Physical 1045036 4082
</syntaxhighlight>
Or ZFS only:
<syntaxhighlight lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</syntaxhighlight>
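The egrep pattern keeps the header, the separator line and the ZFS line. Applied to a canned ::memstat excerpt (numbers copied from the listing above):

<syntaxhighlight lang=bash>
# Apply the same filter to a canned ::memstat excerpt
printf '%s\n' \
    "Page Summary     Pages    MB  %Tot" \
    "------------ --------- ----- ----" \
    "Kernel          255013   996   24%" \
    "ZFS File Data   359196  1403   34%" |
egrep '(Page Summary|-----|ZFS)'
# the Kernel line is filtered out; header, separator and ZFS line remain
</syntaxhighlight>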
Print all ARC parameters:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::arc -m" | mdb -k
hits = 80839319
misses = 3717788
demand_data_hits = 4127150
demand_data_misses = 51589
demand_metadata_hits = 9467792
demand_metadata_misses = 2125852
prefetch_data_hits = 127941
prefetch_data_misses = 596238
prefetch_metadata_hits = 67116436
prefetch_metadata_misses = 944109
mru_hits = 2031248
mru_ghost_hits = 1906199
mfu_hits = 78514880
mfu_ghost_hits = 993236
deleted = 880714
recycle_miss = 1381210
mutex_miss = 197
evict_skip = 38573528
evict_l2_cached = 0
evict_l2_eligible = 94658370048
evict_l2_ineligible = 8946457600
hash_elements = 79571
hash_elements_max = 82328
hash_collisions = 3005774
hash_chains = 22460
hash_chain_max = 8
p = 64 MB
c = 512 MB
c_min = 127 MB
c_max = 512 MB
size = 512 MB
hdr_size = 14825736
data_size = 468982784
other_size = 53480992
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0
l2_write_bytes = 0
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0
l2_hdr_size = 0
memory_throttle_count = 0
arc_no_grow = 0
arc_tempreserve = 0 MB
arc_meta_used = 150 MB
arc_meta_limit = 128 MB
arc_meta_max = 313 MB
</syntaxhighlight>
You can also print all parameters that are set for ZFS with:
<syntaxhighlight lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits = 592730
misses = 5095
demand_data_hits = 0
demand_data_misses = 0
demand_metadata_hits = 592719
demand_metadata_misses = 4866
prefetch_data_hits = 0
prefetch_data_misses = 0
...
</syntaxhighlight>
Kernel parameters can also be set online with:
<syntaxhighlight lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max: <old value> = 0x100000000
</syntaxhighlight>
This sets zfs_arc_max to 4 GB = 0x100000000.
== Limiting the ARC Cache ==
Simply add to /etc/system:
set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<syntaxhighlight lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</syntaxhighlight>
Siehe auch [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
'''Warning:''' NEVER do this on a production system!
Never use ''mdb -kw'' to set these values there!
On a '''test system''', however, you can look up the position in the kernel with
<syntaxhighlight lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</syntaxhighlight>
Calculate for example 8GB:
<syntaxhighlight lang=bash>
# printf "0x%x\n" $[ 8 * 1024 ** 3 ]
0x200000000
</syntaxhighlight>
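The reverse check (what a given hex zfs_arc_max value means in GiB) is plain shell arithmetic:

<syntaxhighlight lang=bash>
# Convert a hex zfs_arc_max value back to GiB (1073741824 = 1024^3)
VAL=0x200000000
echo $(( VAL / 1073741824 ))
# prints: 8
</syntaxhighlight>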
And raise the values like this:
arc.c = arc.c_max
arc.p = arc.c / 2
<syntaxhighlight lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000 = 0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480 = 0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000 = 0x100000000
</syntaxhighlight>
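The arithmetic behind the three pokes above (c_max = c = desired size, p = c / 2) can be sketched in a few lines of shell; the 8 GiB figure matches the example:

```shell
# Compute the values written into arc_stats above:
# c_max and c get the full desired ARC size, p gets half of it.
NUMGB=8
c_max=$(( NUMGB * 1024 ** 3 ))
printf 'arcstat_c_max = 0x%x\n' "$c_max"
printf 'arcstat_c     = 0x%x\n' "$c_max"
printf 'arcstat_p     = 0x%x\n' $(( c_max / 2 ))
# → arcstat_c_max = 0x200000000
# → arcstat_c     = 0x200000000
# → arcstat_p     = 0x100000000
```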
== Better display of ZFS space usage ==
<pre>
$ zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 25.4G 7.79G 0 64K 0 7.79G
rpool/ROOT 25.4G 6.29G 0 18K 0 6.29G
rpool/ROOT/snv_98 25.4G 6.29G 0 6.29G 0 0
rpool/dump 25.4G 1.00G 0 1.00G 0 0
rpool/export 25.4G 38K 0 20K 0 18K
rpool/export/home 25.4G 18K 0 18K 0 0
rpool/swap 25.8G 512M 0 111M 401M 0
</pre>
If zfs list -o space is not available as a shortcut yet, the following usually works:
<pre>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</pre>
== Migrating UFS root to ZFS root via Live Upgrade ==
First create the ZFS root pool:
<pre>
# zpool create rpool /dev/dsk/<zfs-disk>
</pre>
If you want to avoid trouble, keep the name rpool.
Create the boot environment (BE) with lucreate:
<pre>
# lucreate -c ufsBE -n zfsBE -p rpool
</pre>
This copies the files into the ZFS environment.
Check that the bootfs property was set correctly:
<pre>
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs rpool/ROOT/zfsBE local
</pre>
Comment out any rootdev entries that may be left over in /etc/system:
<pre>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</pre>
Install the ZFS boot block on the ZFS disk:
<pre>
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
</pre>
Activate the new BE:
<pre>
# luactivate zfsBE
</pre>
==cannot destroy 'snapshot': dataset is busy==
<syntaxhighlight lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME TAG TIMESTAMP
zpool1/raiddisk0@send_1 .send-14952-0 Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1 .send-16117-0 Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1 .send-26208-0 Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1 .send-8129-0 Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</syntaxhighlight>
==Fragmentation==
<syntaxhighlight lang=bash>
# zdb -mm <pool> | nawk '/fragmentation/{count++;frag+=$NF}END{printf "Overall fragmentation %.2f%%\n",(frag/count);}'
</syntaxhighlight>
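To see what the awk part computes, you can feed it a few hypothetical fragmentation lines. On a real pool, metaslabs without data report `-`; this variant skips them so they do not drag the average down:

```shell
# Canned sample standing in for `zdb -mm <pool>` output; the awk
# average skips metaslabs whose fragmentation is reported as "-".
printf 'fragmentation 10%%\nfragmentation 20%%\nfragmentation -\n' |
  awk '/fragmentation/ && $NF != "-" { count++; frag += $NF }
       END { if (count) printf "Overall fragmentation %.2f%%\n", frag / count }'
# → Overall fragmentation 15.00%
```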
7363fc92185bc59bd3a2d8f69ddff01e167c7613
MySQL Tipps und Tricks
0
197
2502
2294
2021-11-26T01:45:53Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<syntaxhighlight lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</syntaxhighlight>
===Mysql processes each second===
<syntaxhighlight lang=bash>
# mysqladmin -i 1 --verbose processlist
</syntaxhighlight>
===All grants===
<syntaxhighlight lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</syntaxhighlight>
Or a little nicer:
<syntaxhighlight lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</syntaxhighlight>
===Last update time===
* Per table
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</syntaxhighlight>
* Per database
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</syntaxhighlight>
==InnoDB space==
===Per database===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</syntaxhighlight>
===Per table===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</syntaxhighlight>
==Logging==
If you use SET GLOBAL, the setting only lasts until the server restarts.
'''Don't forget to add it to your my.cnf to make it permanent!'''
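For example, a minimal fragment (the file name is a placeholder) dropped into /etc/mysql/conf.d/ makes slow-query logging survive a restart:

```ini
# /etc/mysql/conf.d/logging.cnf - hypothetical file name
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
log_output          = FILE
```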
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</syntaxhighlight>
* Log to files (the locations set via general_log_file and slow_query_log_file)
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</syntaxhighlight>
* Both: tables and files
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</syntaxhighlight>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</syntaxhighlight>
is equal to
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</syntaxhighlight>
===Enable/disable general logging===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
===Enable/disable logging of slow queries===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
If you get
<pre>
ERROR: Failed on connect: SSL connection error: protocol version mismatch
</pre>
try:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
To find out which binlog file to investigate on the master, run this on your slave:
<syntaxhighlight lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</syntaxhighlight>
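The filter above prints the whole status line; to extract just the file name for scripting, extend the awk action — shown here against a canned sample line instead of a live slave:

```shell
# Sample line standing in for `mysql -e 'show slave status\G'` output;
# the awk action pulls out only the binlog file name.
printf '          Master_Log_File: mysql-bin.000042\n' |
  awk '$1 == "Master_Log_File:" { print $2 }'
# → mysql-bin.000042
```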
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<syntaxhighlight lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</syntaxhighlight>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; ext3/ext4 default mode; journals metadata and groups metadata with the related data changes)
* data=journal (worst performance but best data protection; journals metadata and all data)
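Combined, a hypothetical /etc/fstab entry for a MySQL data volume with the faster journaling mode might look like this (device from the mkfs example above; the mount point is an assumption):

```text
# ext4 data volume for MySQL: noatime plus metadata-only journaling
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
```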
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<syntaxhighlight lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
Determine the size:
<syntaxhighlight lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</syntaxhighlight>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</syntaxhighlight>
Start mysql:
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
Aaaaand... do not forget AppArmor! Like I did... :-D
<syntaxhighlight lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</syntaxhighlight>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<syntaxhighlight lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</syntaxhighlight>
Reload apparmor:
<syntaxhighlight lang=bash>
# service apparmor reload
</syntaxhighlight>
Another try!
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
<syntaxhighlight lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</syntaxhighlight>
Much better!
So shut down MySQL again!
Then edit your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</syntaxhighlight>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<syntaxhighlight lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</syntaxhighlight>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems, do this:
<syntaxhighlight lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</syntaxhighlight>
====== /etc/sysctl.d/99-mysql.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</syntaxhighlight>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</syntaxhighlight>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<syntaxhighlight lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</syntaxhighlight>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open-files limit for the service you have to tell systemd the new limit.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate and check the limit
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</syntaxhighlight>
====== Modify systemd service to wait for NFS ======
To make sure the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the changes...
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</syntaxhighlight>
... and check they are active:
<syntaxhighlight lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</syntaxhighlight>
====== /etc/idmapd.conf ======
<syntaxhighlight lang=text>
# Domain = localdomain
Domain = this.domain.tld
</syntaxhighlight>
====== /etc/fstab ======
<syntaxhighlight lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</syntaxhighlight>
<syntaxhighlight lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</syntaxhighlight>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<syntaxhighlight lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</syntaxhighlight>
====== Short stupid performance test ======
<syntaxhighlight lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</syntaxhighlight>
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
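The `0.7*total_mem_size` rule from the comment can be computed directly. The MemTotal line below is a canned 2 GiB sample (on a live box, read /proc/meminfo instead); it reproduces the 1433M used above:

```shell
# Derive innodb_buffer_pool_size as ~70% of total RAM.
# Canned sample; on a real host: awk '...' /proc/meminfo
printf 'MemTotal:        2097152 kB\n' |
  awk '/MemTotal/ { printf "innodb_buffer_pool_size=%dM\n", $2 * 0.7 / 1024 }'
# → innodb_buffer_pool_size=1433M
```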
==Analyze==
<syntaxhighlight lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</syntaxhighlight>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<syntaxhighlight lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</syntaxhighlight>
===percona-toolkit===
<syntaxhighlight lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</syntaxhighlight>
===Sysbench===
<syntaxhighlight lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</syntaxhighlight>
==Recover a damaged root account==
===Lost grants===
Try out:
<syntaxhighlight lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
Or:
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</syntaxhighlight>
===Lost password===
<syntaxhighlight lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<syntaxhighlight lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</syntaxhighlight>
/etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
/etc/mysql/conf.d/myisam.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</syntaxhighlight>
/etc/mysql/conf.d/mysqld.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
syslog
</syntaxhighlight>
/etc/mysql/conf.d/query_cache.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</syntaxhighlight>
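With the options spread over several conf.d snippets, a variable can silently end up set more than once. A quick duplicate check (a sketch; the awk only looks at lines starting with a letter, and the conf.d path is just this page's layout — point it at any collection of *.cnf files):

```shell
# List option names that are set more than once across all config snippets
awk -F '[ =]' '/^[A-Za-z]/ {count[$1]++}
END {for (k in count) if (count[k] > 1) print count[k], k}' /etc/mysql/conf.d/*.cnf
```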
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<syntaxhighlight lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</syntaxhighlight>
1e1eb6536550e52365c010aee94f9b93b5756d3a
Inetd services
0
251
2503
2328
2021-11-26T01:50:16Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris]]
==Setting up rsyncd as inetd service==
1. Put it into the legacy file /etc/inetd.conf
<syntaxhighlight lang=bash>
# printf "rsync\tstream\ttcp\tnowait\troot\t/usr/bin/rsync\t/usr/bin/rsync --config=/etc/rsyncd.conf --daemon\n" >> /etc/inetd.conf
</syntaxhighlight>
2. Use inetconv to generate your XML file
<syntaxhighlight lang=bash>
# inetconv -o /tmp
100235/1 -> /tmp/100235_1-rpc_ticotsord.xml
Importing 100235_1-rpc_ticotsord.xml ...Done
rsync -> /tmp/rsync-tcp.xml
Importing rsync-tcp.xml ...Done
</syntaxhighlight>
3. Optionally modify the generated XML file /tmp/rsync-tcp.xml
4. Import the XML file
<syntaxhighlight lang=bash>
# svccfg import /tmp/rsync-tcp.xml
</syntaxhighlight>
5. Enable it:
<syntaxhighlight lang=bash>
# inetadm -e svc:/network/rsync/tcp:default
</syntaxhighlight>
6. Check it:
<syntaxhighlight lang=bash>
# netstat -anf inet | nawk -v port="$(nawk '$1=="rsync"{gsub(/\/.*$/,"",$2);print $2;}' /etc/services)" '$1 ~ port"$" && $NF=="LISTEN"'
*.873 *.* 0 0 49152 0 LISTEN
</syntaxhighlight>
056ca7527fc5abaf72ce16816b836f7590619256
Category:Ubuntu
14
121
2505
333
2021-11-26T01:52:38Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
MariaDB Tipps und Tricks
0
235
2506
2340
2021-11-26T01:55:28Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:MariaDB]]
==ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded==
===Problem===
<syntaxhighlight lang=bash>
# mysql
ERROR 1524 (HY000): Plugin 'unix_socket' is not loaded
</syntaxhighlight>
===Solution===
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables
150918 15:41:13 mysqld_safe Logging to '/var/log/mysql/error.log'.
150918 15:41:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.0.20-MariaDB-0ubuntu0.15.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> INSERT INTO mysql.plugin (name, dl) VALUES ('unix_socket', 'auth_socket');
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> shutdown
# service mysql start
</syntaxhighlight>
fabf53712363fbde5aac4de6e846bd64c5a48468
Docker tips and tricks
0
372
2507
2284
2021-11-26T01:59:23Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
== Using docker behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit docker.service
</syntaxhighlight>
Enter the next three lines and save:
<syntaxhighlight lang=ini>
[Service]
Environment="HTTP_PROXY=http://user:pass@proxy:port"
Environment="HTTPS_PROXY=http://user:pass@proxy:port"
</syntaxhighlight>
Restart docker:
<syntaxhighlight lang=bash>
# systemctl restart docker.service
</syntaxhighlight>
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<syntaxhighlight lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</syntaxhighlight>
6cd0c0d5e4483dbfc90c41a3745ceb3587a096a4
SunCluster oneliner
0
189
2508
2296
2021-11-26T02:02:06Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:SunCluster|Einzeiler]]
==Resource Groups to remaster==
<syntaxhighlight lang=bash>
# /usr/cluster/bin/clrg status | \
/usr/bin/nawk '
NR<=5 || ( NF>=3 && $(NF-1)=="Yes" ){
next;
}
NF==4 {
rg=$1;
primary=$2;
if($NF=="Online"){
printf "%20s\t%s on %s\n",rg,$NF,primary
}
while($0 !~ /^$/){
getline;
if($NF=="Online"){
printf "%20s\t%s on %s, but not on primary %s\n",rg,$NF,$1,primary;
list=list" "rg
}
}
}
END{
if(list != ""){
printf "To fix it do:\n\tclrg remaster %s\n",list;
}
}'
</syntaxhighlight>
20d457c8c450bd9de951e6f73543a303f9d323db
Category:OpenVPN
14
103
2509
285
2021-11-26T02:05:35Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
StorageTek SL150
0
190
2510
2315
2021-11-26T02:05:41Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Backup]]
=StorageTek SL150 Modular Tapelibrary=
==General Knowledge==
===Default Password===
passw0rd
===Solaris Configuration===
To use the Ultrium-6 Tape drives with Solaris you have to put the following into your st.conf:
<syntaxhighlight lang=bash>
tape-config-list =
"HP Ultrium 6-SCSI ","HP Ultrium 6-SCSI","HP Ultrium 6","HP Ultrium LTO 6","HP_LTO_GEN_6";
HP_LTO_GEN_6 = 2,0x3B,0,0x18659,4,0x00,0x46,0x58,0x5A,3,60,1200,600,1200,600,600,18000
</syntaxhighlight>
The vendor string has to be exactly 8 characters:
HP<6 spaces>Product...
Unload the st driver after changing the st.conf:
<syntaxhighlight lang=bash>
# modunload -i $(modinfo | nawk '$6=="st"{print $1}')
</syntaxhighlight>
Check if the new config settings matched the drive:
<syntaxhighlight lang=bash>
# mt -f /dev/rmt/0cn config
"HP Ultrium 6-SCSI", "HP Ultrium 6-SCSI ", "CFGHPULTRIUM6SCSI";
CFGHPULTRIUM6SCSI = 2,0x3B,0,0x18619,4,0x58,0x58,0x5A,0x5A,3,60,1200,600,1200,600,600,18000;
</syntaxhighlight>
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
fe0d391e04a0f6713321daa31e746e18e18723e5
Category:Pflanzen
14
41
2511
76
2021-11-26T02:07:23Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Projekte]]
ea19c1c575c94b8bc19915b9c3d4bf59f7da23f0
Ansible tips and tricks
0
299
2512
2209
2021-11-26T02:08:31Z
Lollypop
2
Text replacement - "<source" to "<syntaxhighlight"
wikitext
text/x-wiki
[[Category:Ansible|Tips and tricks]]
== Ansible commandline ==
=== Get settings for host ===
Gather all settings for the host given in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
Gather the groups for the host given in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the response file, sets each of them as a fact (the name prefixed with oracle_ unless it already is), and additionally collects them in the variable <i>oracle_environment</i>. That variable can be passed to <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
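To check the extraction outside Ansible, the same awk from the first task can be run against a mock response file (a sketch; paths and values are made up):

```shell
# Hypothetical response file with the KEY=VALUE layout the task expects
rsp=$(mktemp)
printf 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1\nORACLE_BASE=/u01/app/oracle\nINVENTORY_LOCATION=/u01/app/oraInventory\n' > "$rsp"
# Same extraction as in the task, for one item
awk -F '=' '/ORACLE_HOME/{print $2;}' "$rsp"
rm -f "$rsp"
```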
== Gathering oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
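The oratab-parsing awk from the shell step can be tried standalone against a sample /etc/oratab (a sketch; SID and paths are invented — only entries with the restart flag set to Y are picked up):

```shell
# Sample oratab: one comment, one Y entry, one N entry
oratab=$(mktemp)
printf '# comment line\nORCL:/u01/app/oracle/product/19.0.0/dbhome_1:Y\nTEST:/u01/app/oracle/product/19.0.0/dbhome_1:N\n' > "$oratab"
# Same awk as in the task: emits the export line for the Y entry only
awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' "$oratab"
rm -f "$oratab"
```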
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
7b30dbc305f62f96ef05618f3157bdd07687dafa
Category:MeerwasserAquarium
14
112
2513
313
2021-11-26T02:16:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Projekte]]
ea19c1c575c94b8bc19915b9c3d4bf59f7da23f0
Category:Grub
14
296
2514
1387
2021-11-26T02:18:47Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Solaris IO Analyse
0
208
2515
2279
2021-11-26T02:20:24Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris]]
==Which filesystem is busy?==
For zfs (-F zfs) you can use this one-liner:
<syntaxhighlight lang=bash>
# fsstat -i $(df -hF zfs | nawk '{print $NF}') 5
</syntaxhighlight>
==Links==
* [https://blogs.oracle.com/BestPerf/entry/i_o_analysis_using_dtrace I/O analysis using DTrace]
* [http://www.brendangregg.com/DTrace/dtrace_oneliners.txt Brendan Gregg's DTrace one-liners]
4183f85b05d218b4c8d5472147f0dffa0c6eccd6
VMWare Linux parameter
0
108
2516
2390
2021-11-26T02:21:41Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:VMWare]][[Category:Ubuntu]]
==/etc/sysctl.conf==
<syntaxhighlight lang=bash>
# vm.swappiness = 0: the kernel will swap only to avoid an out-of-memory condition.
vm.swappiness = 0
# TCP SYN Flood Protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 3
</syntaxhighlight>
==Pinning kernel to 2.6 for ESX 4.1==
Create /etc/apt/preferences.d/linux-image with this content:
<syntaxhighlight lang=bash>
Package: linux-image-server linux-server linux-headers-server
Pin: version 2.6.*
Pin-Priority: 1000
</syntaxhighlight>
==Autobuild of kernel drivers==
Create /etc/kernel/header_postinst.d/vmware :
<syntaxhighlight lang=bash>
#!/bin/bash
# We're passed the version of the kernel being installed
inst_kern=$1
/usr/bin/vmware-config-tools.pl --modules-only --default --kernel-version ${inst_kern}
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chmod 755 /etc/kernel/header_postinst.d/vmware
</syntaxhighlight>
==Prebuild packages from VMWare==
<syntaxhighlight lang=bash>
echo "deb http://packages.vmware.com/tools/esx/latest/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/vmware-repository.list
apt-key adv --keyserver subkeys.pgp.net --recv-keys C0B5E0AB66FD4949
apt-get update
apt-get install vmware-tools-core vmware-tools-esx-nox vmware-tools-foundation \
vmware-tools-guestlib vmware-tools-libraries-nox vmware-tools-libraries-x \
vmware-tools-plugins-autoupgrade vmware-tools-plugins-deploypkg \
vmware-tools-plugins-grabbitmqproxy vmware-tools-plugins-guestinfo \
vmware-tools-plugins-hgfsserver vmware-tools-plugins-powerops \
vmware-tools-plugins-timesync vmware-tools-plugins-vix \
vmware-tools-plugins-vmbackup vmware-tools-services vmware-tools-user
</syntaxhighlight>
==Source from VMWare==
After removing any previously installed vmware-tools, follow these steps:
1. Add this to your /etc/apt/sources.list:
<syntaxhighlight lang=bash>
deb http://packages.vmware.com/tools/esx/latest/ubuntu precise main
</syntaxhighlight>
Then do:
<syntaxhighlight lang=bash>
gpg --search C0B5E0AB66FD4949 # add the key (1)
gpg -a --export C0B5E0AB66FD4949 | apt-key add -
</syntaxhighlight>
2. Update your package database:
<syntaxhighlight lang=bash>
# aptitude update
</syntaxhighlight>
3. Get Module-Assistant:
<syntaxhighlight lang=bash>
# aptitude install module-assistant
</syntaxhighlight>
4. Get the base packages:
<syntaxhighlight lang=bash>
# aptitude install vmware-tools-foundation vmware-tools-libraries-nox vmware-tools-guestlib vmware-tools-core
</syntaxhighlight>
5. Get the modules:
<syntaxhighlight lang=bash>
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-common
# aptitude install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules-source
</syntaxhighlight>
6. Get kernel and headers:
<syntaxhighlight lang=bash>
# aptitude install linux-{image,headers}-3.2.0-52-generic
</syntaxhighlight>
7. Compile and install the modules with module assistant
<syntaxhighlight lang=bash>
# m-a prepare --kvers-list 3.2.0-52-generic
# m-a --text-mode --kvers-list 3.2.0-52-generic build vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
# m-a --text-mode --kvers-list 3.2.0-52-generic install vmware-tools-{vmci,vmxnet,vsock,vmblock,vmhgfs,vmsync}-modules
</syntaxhighlight>
== Minimal /etc/vmware-tools/config ==
<syntaxhighlight lang=bash>
libdir = "/usr/lib/vmware-tools"
</syntaxhighlight>
== Switch to Ubuntu open-vm-tools ==
<syntaxhighlight lang=bash>
# /usr/bin/vmware-uninstall-tools.pl ; aptitude purge open-vm-tools ; apt update ; apt install open-vm-tools
</syntaxhighlight>
b22238534e4e081e9de98de62a7fbd1355114c1a
Exim cheatsheet
0
27
2517
2479
2021-11-26T02:21:42Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Exim]]
=Questions and Answers=
==View the header of a message ID==
<pre># exim -Mvh <msgid></pre>
==View statistics of the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <address></pre>
===With lots of debugging===
<pre># exim -bv -d+all <address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I retry delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue directly
<pre># exiqgrep -r <pattern></pre>
exipick is even better than exigrep!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails to <recipient> in the queue:
<pre>
# exipick -r <recipient>
</pre>
List all mails from <sender> in the queue:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address for all mails that are between 40 and 50 minutes old and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: look at the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Reset the rate limit for a user==
Find the entries:
<syntaxhighlight lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</syntaxhighlight>
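For scripted cleanup, the hints-DB keys can be pulled out of the exim_dumpdb output with a little awk. A sketch using the sample lines from above as input:

```shell
# Extract just the keys from exim_dumpdb ratelimit output
printf '%s\n' \
  '24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de' \
  '24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de' |
  awk '{for(i=1;i<=NF;i++) if($i=="key:") print $(i+1)}'
```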
Delete the entries:
For this you use the somewhat scruffy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<syntaxhighlight lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</syntaxhighlight>
==Spam==
<syntaxhighlight lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</syntaxhighlight>
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</syntaxhighlight>
But logrotate with these files is a little bit tricky.
I found this to be a good way to rotate the logfiles:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
== touch the dummy rotate file ==
This one is needed to trigger the rotation even if it is a dummy.
<syntaxhighlight lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</syntaxhighlight>
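The find/regex cleanup in the lastaction script can be dry-run against scratch files first (a sketch; the file names are made up, and -print stands in for the gzip/delete actions):

```shell
# Create a scratch dir with names that should and should not match
dir=$(mktemp -d)
touch "$dir/mainlog-20240101" "$dir/rejectlog-20240102" "$dir/paniclog" "$dir/otherlog"
# Same regex as in the logrotate script; -print shows what would be touched
find "$dir" -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' -print
rm -rf "$dir"
```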
292fea3a7fe82de55f25c4cd3527e0ad95fca537
HP Smart Array Controller
0
365
2518
2358
2021-11-26T02:21:44Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Hardware]]
=ssacli=
=== Install the tool ===
<syntaxhighlight lang=bash>
# echo "deb http://downloads.linux.hpe.com/SDR/downloads/MCP/ubuntu $(lsb_release --short --codename)/current non-free" >> /etc/apt/sources.list.d/hp.list
# curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
# apt update && apt install --yes ssacli
</syntaxhighlight>
=== Revive formerly failed disk ===
<syntaxhighlight lang=bash>
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): Failed
logicaldrive 6 (931.48 GB, RAID 0): OK
# ssacli ctrl slot=0 ld 5 modify reenable forced
# ssacli ctrl slot=0 ld all show status
logicaldrive 1 (279.37 GB, RAID 0): OK
logicaldrive 2 (931.48 GB, RAID 0): OK
logicaldrive 3 (279.37 GB, RAID 0): OK
logicaldrive 4 (931.48 GB, RAID 0): OK
logicaldrive 5 (279.37 GB, RAID 0): OK
logicaldrive 6 (931.48 GB, RAID 0): OK
# lsscsi
[0:0:0:0] storage HP P440ar 2.52 -
[0:1:0:0] disk HP LOGICAL VOLUME 2.52 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 2.52 /dev/sdb
[0:1:0:2] disk HP LOGICAL VOLUME 2.52 /dev/sdc
[0:1:0:3] disk HP LOGICAL VOLUME 2.52 /dev/sdd
[0:1:0:4] disk HP LOGICAL VOLUME 2.52 /dev/sde
[0:1:0:5] disk HP LOGICAL VOLUME 2.52 /dev/sdf
</syntaxhighlight>
=hpacucli=
==reenable disk after replacement==
<syntaxhighlight lang=bash>
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, Failed)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
[root@app02 ~]# hpacucli controller slot=0 logicaldrive 2 modify reenable forced
[root@app02 ~]# hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 50014380141236F0)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (279.4 GB, RAID 0, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50014380141236FF)
</syntaxhighlight>
b02c45692e10b4f879b8b37eb5119f138b5bab45
MariaDB SSL
0
295
2519
2461
2021-11-26T02:27:29Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MariaDB|SSL]]
[[Category:MySQL|SSL]]
To be continued!
==Create keys and certificates==
<syntaxhighlight lang=bash>
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server'
</syntaxhighlight>
<syntaxhighlight lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=web-server.domain.de'
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=DE/ST=Hamburg/L=Hamburg/O=Spiders Cave/CN=db-server.domain.de'
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
chown mysql:www-data *
chown www-data:www-data client-key.pem
chmod 644 *-cert.pem
chmod 600 *-key.pem
</syntaxhighlight>
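Before pointing clients at the server, it is worth checking that the certificates actually chain back to the CA. A self-contained sketch that builds a throwaway CA and server certificate with the same commands as above (all subject names are placeholders):

```shell
# Throwaway CA and server certificate in a temp dir
dir=$(mktemp -d) && cd "$dir"
openssl genrsa 2048 > ca-key.pem 2>/dev/null
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/CN=test-ca'
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server-req.pem -subj '/CN=db-server.example' 2>/dev/null
openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem 2>/dev/null
# Should report: server-cert.pem: OK
openssl verify -CAfile ca-cert.pem server-cert.pem
```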
<syntaxhighlight lang=php>
# php -r '
$db = new PDO("mysql:host=db-server.domain.de;dbname=testdb", "ssltestuser", "ssltestuserpassword",
array(
PDO::MYSQL_ATTR_SSL_CA=>"/etc/mysql/ssl/ca-cert.pem",
PDO::MYSQL_ATTR_SSL_KEY=>"/etc/mysql/ssl/client-key.pem",
PDO::MYSQL_ATTR_SSL_CERT=>"/etc/mysql/ssl/client-cert.pem",
PDO::MYSQL_ATTR_SSL_CAPATH=>"/etc/ssl/certs"
)
);
$result = $db->query("SHOW STATUS LIKE \"SSL_%\"");
$result->execute();
$status=$result->fetchAll();
print_r($status);
'
</syntaxhighlight>
493246fa628b3ff2d88502a7a4af3bc753b68cb5
ZFS on Linux
0
222
2520
2473
2021-11-26T02:29:11Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as a rawvmdk device in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SeC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
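The ashift column in the listing above is simply log2 of the physical sector size; the exponent trick from the awk can be run on its own (a sketch of the same logic):

```shell
# ashift is log2 of the physical sector size: 512 -> 9, 4096 -> 12
echo 4096 | awk '{for (i = 0; $1 > 1; i++) $1 /= 2; print i}'
```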
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh, then continue:
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
!!!BE CAREFUL with the name after the @ !!!
The name after the @ is the name of the ZFS dataset that will be DESTROYED and recreated!!!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver runs more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting. The initramfs is the filesystem that is read when the module is loaded.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
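For a quick overview of all live values, a minimal shell sketch (the function name is mine) that prints everything under such a parameters directory as name=value pairs:
<syntaxhighlight lang=bash>
# Print name=value for every file in a parameters directory,
# e.g. dump_params /sys/module/zfs/parameters
dump_params() {
    local dir=$1
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        printf '%s=%s\n' "${f##*/}" "$(cat "$f")"
    done
}
</syntaxhighlight>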
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
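To pull a single value out of that kstat output programmatically, a small sketch (the function name is mine; the third column is the value, as in the listing above):
<syntaxhighlight lang=bash>
# Print the value (third column) of a named kstat from an arcstats-style stream
kstat_value() {
    awk -v k="$1" '$1 == k { print $3 }'
}
# e.g. kstat_value c_max < /proc/spl/kstat/zfs/arcstats
</syntaxhighlight>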
===Limit the cache without reboot (not permanent)===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (which is too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
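By the way, the `$[ ... ]` arithmetic used here is deprecated bash syntax; `$(( ... ))` is the standard form. A tiny helper (the name is mine) for the same size computation:
<syntaxhighlight lang=bash>
# Convert megabytes to bytes with standard $(( )) arithmetic
mb_to_bytes() { echo $(( $1 * 1024 * 1024 )); }
# e.g. echo "options zfs zfs_arc_max=$(mb_to_bytes 512)" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>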
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
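The rate computation in the awk above (hits * 100 / total, guarded against a zero total) as a standalone shell function (the name is mine):
<syntaxhighlight lang=bash>
# Hit rate in percent (integer math), guarded against a zero total
hit_rate() {
    local hits=$1 misses=$2
    local total=$(( hits + misses ))
    if [ "$total" -eq 0 ]; then
        echo 0
    else
        echo $(( hits * 100 / total ))
    fi
}
</syntaxhighlight>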
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
80b8b70f7e96659e97751b6735feef5eb00db4e0
NetApp SSH
0
110
2521
2435
2021-11-26T02:34:36Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:NetApp|SSH]]
== Check whether the SSH home directory /etc/sshd/<user>/.ssh exists ==
<syntaxhighlight lang=bash>
nac*> priv set -q diag
nac*> ls /etc/sshd/
.
..
ssh_host_key
ssh_host_key.pub
ssh_host_rsa_key
ssh_host_rsa_key.pub
ssh_host_dsa_key
ssh_host_dsa_key.pub
</syntaxhighlight>
== Create a directory with mode 0700 ==
<syntaxhighlight lang=bash>
nac*> options wafl.default_qtree_mode
wafl.default_qtree_mode 0777
nac*> options wafl.default_qtree_mode 0700
nac*> qtree create /vol/vol0/__
nac*> options wafl.default_qtree_mode 0777
</syntaxhighlight>
== Check NDMPd status / turn it on ==
<syntaxhighlight lang=bash>
nac*> ndmpd status
ndmpd OFF.
No ndmpd sessions active.
nac*> ndmpd on
nac*> ndmpd status
ndmpd ON.
No ndmpd sessions active.
</syntaxhighlight>
== Create the directory by copying the qtree ==
<syntaxhighlight lang=bash>
nac*> ndmpcopy /vol/vol0/__ /vol/vol0/etc/sshd/root/.ssh
...
Ndmpcopy: Transfer successful [ 0 hours, 0 minutes, 20 seconds ]
Ndmpcopy: Done
nac*> qtree delete /vol/vol0/__
</syntaxhighlight>
== Write the SSH key to /etc/sshd/<user>/.ssh/authorized_keys ==
<syntaxhighlight lang=bash>
nac*> wrfile /etc/sshd/root/.ssh/authorized_keys
ssh-dss AAA...== user@clienthost
^C
</syntaxhighlight>
2121a667293393e0ec56407d8ac35d73beaaaf87
Parazoanthus gracilis
0
134
2522
362
2021-11-26T02:34:49Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Gelbe Krustenanemone
| WissName = Parazoanthus gracilis
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung =
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 20°C - 26°C
}}
4e936184c75b191489ad2878021d87c6a9cf2400
Category:Dracunculus
14
84
2523
169
2021-11-26T02:35:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Araceae]]
5024b68e594e35c88018933d70ac6158500c45b3
OpenVPN Inline Certs
0
104
2524
2387
2021-11-26T02:37:33Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:OpenVPN]]
To get an OpenVPN configuration in a single file you can inline all referenced files like this:
<syntaxhighlight lang=bash>
$ nawk '
/^(tls-auth|ca|cert|key)/ {
type=$1;
file=$2;
# for tls-auth we need the key-direction
if(type=="tls-auth")print "key-direction",$3;
print "<"type">";
while(getline tlsauth<file)
print tlsauth;
close(file);
print "</"type">";
next;
}
{
# All other lines are printed as they are
print;
}' connection.ovpn
</syntaxhighlight>
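The wrapping done for each referenced file can be shown in miniature as a shell function (the function name is mine):
<syntaxhighlight lang=bash>
# Wrap a file's contents in <tag>...</tag>, as the awk above does per config keyword
inline_file() {
    local tag=$1 file=$2
    printf '<%s>\n' "$tag"
    cat "$file"
    printf '</%s>\n' "$tag"
}
# e.g. inline_file ca ca.pem
</syntaxhighlight>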
And to extract the inlined sections back out into separate files:
<syntaxhighlight lang=bash>
$ nawk '
/^<(tls-auth|ca|dh|cert|key)>/ {
type=$1;
gsub(/[<>]/,"",type);
file=type".pem";
print type,file;
print ""> file;
while(getline) {
if($0 == "</"type">"){
fflush(file);
close(file);
break;
}
print $0>>file;}
next;
}
{
# All other lines are printed as they are
print $0;
}' connection.ovpn > connection_.ovpn
</syntaxhighlight>
3919d4a83095f7352f9a2675c8ed5044b029c4a6
Solaris mdb magic
0
23
2525
2332
2021-11-26T02:40:07Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Modular Debugger]]
=Various small mdb tricks=
==Memory usage==
<pre>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</pre>
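To grab a single number out of that table non-interactively, a hedged awk sketch (the function name is mine) for the Kernel row's %Tot column:
<syntaxhighlight lang=bash>
# Print the last column (%Tot) of the Kernel row from ::memstat output
kernel_pct() {
    awk '$1 == "Kernel" { print $NF }'
}
# e.g. echo ::memstat | mdb -k | kernel_pct
</syntaxhighlight>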
==Query kernel parameters==
Syntax: echo '<Parameter>/D' | mdb -k
<pre>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</pre>
==Set kernel parameters==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<pre>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</pre>
==Inquiry strings in Solaris 11==
<syntaxhighlight lang=bash>
# echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb -k
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
inq_vid = [ "NECVMWar" ]
inq_pid = [ "VMware SATA CD00" ]
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
</syntaxhighlight>
27ec055e45da042e15251e69ac50c9f7f7233cde
ZFS RaidController
0
186
2526
2380
2021-11-26T02:40:30Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris]]
[[Category:ZFS|RaidController]]
=ZFS is better directly on the disks=
Because of the better checksums in ZFS you want to disable the RAID controller, or pass the disks through individually.
==X4170 with MegaRAID==
Configure all disks as individual logical drives
<syntaxhighlight lang=bash>
-cfgclr -a0
-cfgldadd -r0[252:0] -a0
-cfgldadd -r0[252:1] -a0
-cfgldadd -r0[252:2] -a0
-cfgldadd -r0[252:3] -a0
-ldsetprop EnDskCache -LAll -a0
-AdpBootDrive -set -L0 -a0
-AdpSetProp MaintainPdFailHistoryEnbl 0 -a0
q for quit
</syntaxhighlight>
883cb9d9516c6f2181bd51ce49ab5160267fc7a4
Ufw
0
224
2527
2301
2021-11-26T02:41:04Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux]]
==Disable IPv6==
/etc/default/ufw
<syntaxhighlight lang=bash>
# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take effect.
IPV6=no
</syntaxhighlight>
/etc/ufw/sysctl.conf
<syntaxhighlight lang=bash>
# Uncomment this to turn off ipv6 autoconfiguration
net/ipv6/conf/default/autoconf=0
net/ipv6/conf/all/autoconf=0
</syntaxhighlight>
==Setup Rules==
===Adding a rule===
<syntaxhighlight lang=bash>
# ufw allow log-all from 192.168.2.0/24 to any app OpenSSH
Rule added
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
</syntaxhighlight>
===Inserting before===
<syntaxhighlight lang=bash>
# ufw insert 1 allow log-all from 192.168.1.0/24 to any app OpenSSH
Rule inserted
# ufw status verbose
Status: active
Logging: on (low)
Default: reject (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN 192.168.1.0/24 (log-all)
22/tcp (OpenSSH) ALLOW IN 192.168.2.0/24 (log-all)
# ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] OpenSSH ALLOW IN 192.168.1.0/24 (log-all)
[ 2] OpenSSH ALLOW IN 192.168.2.0/24 (log-all)
</syntaxhighlight>
==Own applications==
===nrpe===
/etc/ufw/applications.d/nrpe
<syntaxhighlight lang=bash>
[NRPE]
title=Nagios NRPE
description=Nagios Remote Plugin Executor
ports=5666/tcp
</syntaxhighlight>
===MySQL===
/etc/ufw/applications.d/mysql
<syntaxhighlight lang=bash>
[MySQL]
title=MySQL Server (MySQL, MYSQL)
description=Old and rusty SQL server
ports=3306/tcp
</syntaxhighlight>
===Exim===
/etc/ufw/applications.d/exim
<syntaxhighlight lang=bash>
[Exim SMTP]
title=Mail Server (Exim, SMTP)
description=Small, but very powerful and efficient mail server
ports=25/tcp
[Exim SMTP Virusscanned]
title=Mail Server (Exim, SMTP Virusscanned)
description=Small, but very powerful and efficient mail server
ports=26/tcp
[Exim SMTPS]
title=Mail Server (Exim, SMTPS)
description=Small, but very powerful and efficient mail server
ports=465/tcp
[Exim SMTP Message Submission]
title=Mail Server (Exim, Message Submission)
description=Small, but very powerful and efficient mail server
ports=587/tcp
</syntaxhighlight>
Get a list of rules to set from Exim's configuration:
<syntaxhighlight lang=awk>
# exim -bP local_interfaces | awk '
BEGIN{
ports[25]="Exim SMTP";
ports[26]="Exim SMTP Virusscanned"
ports[465]="Exim SMTPS";
ports[587]="Exim SMTP Message Submission";
from="any"; # <----- Look if it fits what you want
}
{
gsub(/^.*= /,"");
split($0,services,/ : /);
for(service in services){
split(services[service],part,/\./);
ip=part[1]"."part[2]"."part[3]"."part[4];
port=part[5];
printf "ufw allow log from %s to %s app \"%s\"\n",from,ip,ports[port];
}
}'
ufw allow log from any to 192.168.5.103 app "Exim SMTP"
ufw allow log from any to 192.168.5.103 app "Exim SMTP Virusscanned"
ufw allow log from any to 192.168.5.103 app "Exim SMTPS"
</syntaxhighlight>
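The IP/port splitting the awk performs (the first four dot-fields form the IP, the fifth is the port) can be sketched standalone (the function name is mine):
<syntaxhighlight lang=bash>
# Split "a.b.c.d.port" into "a.b.c.d port": the last dot-field is the port
split_ip_port() {
    echo "${1%.*} ${1##*.}"
}
</syntaxhighlight>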
==Inspect your application profile==
<syntaxhighlight lang=bash>
# ufw app info MySQL
Profile: MySQL
Title: MySQL Server (MySQL, MYSQL)
Description: Old and rusty SQL server
Port:
3306/tcp
</syntaxhighlight>
6bc53134faf462bbb7cef3fc1c832b1585eec0b6
Roundcube
0
232
2528
2233
2021-11-26T02:42:12Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Web]]
[[Category:Mail]]
==Automatic carddav import from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<syntaxhighlight lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</syntaxhighlight>
This automatically imports all Owncloud contacts from the addressbook "contacts" into the Roundcube carddav plugin:
/usr/share/roundcube/plugins/carddav/config.inc.php
<syntaxhighlight lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from CardDAV preferences section so users can't even
// see it
'hide' => false,
);
</syntaxhighlight>
d48236e6fb88a57d9db1114101178bd081bbd65f
Category:Termiten
14
302
2529
1417
2021-11-26T02:43:51Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: Insekten]]
c62df6a98eb1a1855d1a76d8718d259994f78830
Network troubleshooting
0
284
2530
2317
2021-11-26T02:45:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Networking|Troubleshooting]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<syntaxhighlight lang=bash>
# ping -I <your virtual ip> <destination>
</syntaxhighlight>
On Solaris
<syntaxhighlight lang=bash>
# ping -sni <your virtual ip> <destination>
</syntaxhighlight>
=== Traceroute ===
<syntaxhighlight lang=bash>
# traceroute -s <your virtual ip> <destination>
</syntaxhighlight>
=== SSH ===
<syntaxhighlight lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</syntaxhighlight>
=== Telnet ===
<syntaxhighlight lang=bash>
# telnet -b <your virtual ip> <destination>
</syntaxhighlight>
== Interface details ==
=== Linux ===
<syntaxhighlight lang=bash>
# ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
</syntaxhighlight>
=== Solaris ===
d6248d74da3fb43d3a016e4c7fb59ea219e4d721
Category:Linux
14
89
2531
180
2021-11-26T02:46:51Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Mycedium elephantotus
0
132
2532
369
2021-11-26T02:48:14Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| Bild = Mycedium_elephantopus.png
| Bildbeschreibung = Mycedium elephantotus
| DeName = Großpolypige Steinkoralle
| WissName = Mycedium elephantotus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australien , Great Barrier Riff, Indonesien, Japan, Papua-Neuguinea
| Habitat =
| Nahrung = Plankton, Zooxanthellen / Licht
| Luftfeuchtigkeit =
| Temperatur = 24°C - 27°C
}}
<gallery mode="packed-hover">
Image:Mycedium_elephantopus.png|Small colony
</gallery>
129c3fa854f30788d04b667228bb111ac22d34cc
IPS cheat sheet
0
98
2533
2326
2021-11-26T02:48:36Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris11]]
=Cheat sheet=
[[File:Ips-one-liners.pdf|page=1|600px]]
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: hostname -> Accept
-> Download Key
-> Download Certificate
2. Copy it to your Solaris 11 host into /var/pkg/ssl.
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
<pre>
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
5. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
6. Check for updates:
<pre>
# pkg update -nv
</pre>
7. If needed or wanted, do the update:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
As of this writing, the OpenCSW repository is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But... in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0 . Let us take a look at what the system thinks is wrong with the files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
=Solaris 11 release=
<syntaxhighlight lang=bash>
$ LANG=C pkg info kernel | nawk '$1 == "Version:"{split($2,version,/\./)}$1 == "Branch:"{split($2,branch,/\./)}END{printf ("Solaris %d.%d Update %d SRU %d SRU-Build %d\n",version[2],version[3],branch[3],branch[4],branch[6])}'
Solaris 5.11 Update 2 SRU 0 SRU-Build 42
</syntaxhighlight>
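The branch decoding the nawk does (fields 3, 4 and 6 of the dot-separated branch string) as a plain shell sketch (the function name is mine):
<syntaxhighlight lang=bash>
# Decode a Solaris 11 'entire' branch string like 0.175.2.0.0.42.2
# into update / SRU / SRU build, mirroring the nawk one-liner above
decode_branch() {
    local IFS=.
    set -- $1
    printf 'Update %s SRU %s SRU-Build %s\n' "$3" "$4" "$6"
}
</syntaxhighlight>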
= Update available? =
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
latest_11_3=$(pkg list -H -af ${package} | nawk '$2 ~ /^0.5.11-0.175.3/{print $2; exit;}')
printf "%s\n%s\nLatest_11.3: %s\n" "${local}" "${remote}" "${latest_11_3}" | nawk -v package="${package}" '
BEGIN{
nr=0;
}
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
$1=="Latest_11.3:" {
split($2, latest_part, "-");
latest_version=latest_part[1];
latest_branch=latest_part[2];
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUptodate at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[3], branch_part[5], branch_part[6]);
if (version[0]=="0.5.11" && branch[0] != latest_branch ) {
split(latest_branch, latest_part, /\./);
be_version3=sprintf("%d.%d.%d.%d.%d",version_part[3], latest_part[3], latest_part[4], latest_part[5], latest_part[6]);
printf ("\nTo update and stay in Solaris 11.3-Branch you can use:\n\tpkg install --accept --require-new-be --be-name solaris_%s\n\n", be_version3);
}else if (version[0]=="0.5.11" && branch[0] == latest_branch ) {
printf ("\nYou are at the latest version of the 11.3-Branch (%s), but you can upgrade to 11.4 .\n",branch[0]);
}
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null \
|| { echo "Cannot refresh packages" >&2 ; exit 1 ; }
if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</syntaxhighlight>
= ZFS automatic snapshots =
<syntaxhighlight lang=bash>
pkg install pkg:/desktop/time-slider
svcadm restart svc:/system/dbus:default
</syntaxhighlight>
b0bb39c69495d4ee531af3bb38bee0c06f0715d9
Solaris Loadgenerator
0
216
2534
2433
2021-11-26T02:52:01Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Loadgenerator]]
This is a little script to generate load. It uses gzip and bzip2 to generate load, fetched from the void and compressed back into the void :-).
Call it with <scriptname> <number> to generate a load of <number>.
<syntaxhighlight lang=bash>
#!/usr/bin/bash
count=$1
for((i=1;i<=${count};i++))
do
cat /dev/urandom | bzip2 | gzip -9 >/dev/null &
done
</syntaxhighlight>
84de8931911bf93d94db9c0aa6e088cca2f72643
NFS
0
386
2535
2436
2021-11-26T02:52:02Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
Some things to know about NFS...
=NFSv3=
==Server==
===Bind rpc.mountd to specific port===
The port of rpc.mountd is usually random, which is a nightmare for firewall administrators, so picking a known port is much better.
* /etc/default/nfs-kernel-server
<syntaxhighlight lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333"
</syntaxhighlight>
===Bind statd to specific port===
You only need this if you still use protocols below NFSv4.
* /etc/default/nfs-common
<syntaxhighlight lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335"
</syntaxhighlight>
===Bind lockd to specific port===
* /etc/sysctl.d/nfs-static-ports.conf
<syntaxhighlight lang=ini>
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</syntaxhighlight>
Activate it without rebooting through:
<syntaxhighlight lang=bash>
# sysctl --load /etc/sysctl.d/nfs-static-ports.conf
fs.nfs.nlm_tcpport = 33336
fs.nfs.nlm_udpport = 33336
</syntaxhighlight>
===Configure ufw===
Caution! The port you set above for mountd has to be the same here! I used 33333; if you changed it above for some reason, change it here too!
* /etc/ufw/applications.d/nfs
<syntaxhighlight lang=ini>
[NFS-Server]
title=NFS-Server
description=NFS Server
ports=111/tcp|111/udp|2049/tcp|33333:33336/tcp
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ufw allow from 172.16.16.16/28 to any app "NFS-Server"
</syntaxhighlight>
=NFSv4.1=
==Server==
===Configure rpc.idmapd===
* /etc/idmapd.conf
You should set a Domain explicitly. Set the same Domain on server and client(s)!
<syntaxhighlight lang=ini>
[General]
...
# set your own domain here, if it differs from FQDN minus hostname.
# you can use a fantasy name, but whatever it is, keep this identical on server and client!
Domain = myfantasy.domain
...
</syntaxhighlight>
===Disable at least NFSv2===
* /etc/default/nfs-kernel-server
<syntaxhighlight lang=ini>
STATDOPTS="--port 33334 --outgoing-port 33335 --no-nfs-version 2"
RPCNFSDOPTS="--no-nfs-version 2"
</syntaxhighlight>
===Disable all but NFSv4 and higher===
* /etc/default/nfs-kernel-server
<syntaxhighlight lang=ini>
RPCMOUNTDOPTS="--manage-gids --port 33333 --no-nfs-version 2 --no-nfs-version 3"
NEED_STATD="no"
NEED_IDMAPD="yes"
RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 3"
</syntaxhighlight>
===Configure ufw===
For plain NFSv4 and up you just need this:
<syntaxhighlight lang=bash>
# ufw allow from 172.16.16.16/28 to any port 2049/tcp
</syntaxhighlight>
If you still need NFSv3, see above.
===List clients that are connected===
<syntaxhighlight lang=bash>
# cat /proc/fs/nfsd/clients/*/info
clientid: 0x7829c17160bf7066
address: "172.16.16.17:778"
name: "Linux NFSv4.1 client01.domain.tld"
minor version: 1
Implementation domain: "kernel.org"
Implementation name: "Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64"
Implementation time: [0, 0]
</syntaxhighlight>
==Server and Client==
36296064f9e457ab060ce0826104ab968062998f
Solaris kernel debugging
0
24
2536
660
2021-11-26T02:54:46Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Kernel Debugging]]
* Boot directly into the debugger
<pre>
ok> boot -kd
...
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
[0]>
</pre>
or, on x86, select the GRUB entry and add -kd to the "kernel" line...
* Enable module debugging (moddebug)
<pre>
[0]> moddebug/W 0x80000000
moddebug: 0 = 0x80000000
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Enable kmem debugging flags (kmem_flags)
<pre>
[0]> kmem_flags/W 0x0000000f
kmem_flags: 0 = 0xf
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Enable snooping
<pre>
[0]> snooping/W 0x1
snooping: 0 = 0x1
[0]> :c
SunOS Release 5.10 Version Generic_141415-07 64-bit
...
</pre>
* Print the stack
<pre>
[0]> $c
</pre>
* Show the last messages
<pre>
[0]> ::msgbuf
</pre>
* Write a crash dump on x86 systems
<pre>
panic...
[0]> $<systemdump
</pre>
* Links
** [http://developers.sun.com/solaris/articles/manage_core_dump.html Core Dump Management on the Solaris OS]
** [http://www.c0t0d0s0.org/presentations/hhosug/hhosug2.pdf PDF of the second HHOSUG meeting]
073f59961e0e5b2e6bc51dab27f9e6eef9db7ae6
Bash cheatsheet
0
37
2537
2445
2021-11-26T02:56:21Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Bash]]
=bash history per user=
See [[SSH_FingerprintLogging|Logging the SSH fingerprint]]
=bash prompt=
Put this in your ~/.bash_profile
<syntaxhighlight lang=bash>
typeset +x PS1="\[\e]0;\u@\h: \w\a\]\u@\h:\w# "
</syntaxhighlight>
=Useful variable substitutions=
==split==
For example split an ip:
<syntaxhighlight lang=bash>
$ delimiter="."
$ ip="10.1.2.3"
$ declare -a octets=( ${ip//${delimiter}/ } )
$ echo "${#octets[@]} octets -> ${octets[@]}"
4 octets -> 10 1 2 3
</syntaxhighlight>
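A sketch of an alternative that lets <i>read</i> do the splitting on a custom IFS, which avoids the globbing pitfalls of an unquoted substitution (variable names are illustrative):

```bash
# Let read split on "." instead of relying on word splitting
# of an unquoted substitution (which is also subject to globbing).
ip="10.1.2.3"
IFS=. read -ra octets <<< "${ip}"
echo "${#octets[@]} octets -> ${octets[*]}"
# -> 4 octets -> 10 1 2 3
```
The IFS assignment prefixes only the <i>read</i> command, so the global IFS is untouched.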
==dirname==
<syntaxhighlight lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself%/*}
/usr/bin
</syntaxhighlight>
==basename==
<syntaxhighlight lang=bash>
$ myself=/usr/bin/blafasel ; echo ${myself##*/}
blafasel
</syntaxhighlight>
==Path name resolving function==
<syntaxhighlight lang=bash>
# dir_resolve originally from http://stackoverflow.com/a/20901614/5887626
# modified at https://lars.timmann.de/wiki/index.php/Bash_cheatsheet
dir_resolve() {
local dir=${1%/*}
local file=${1##*/}
# if the name does not contain a / leave file blank or the name will be name/name
[ "_${1/\//}_" == "_${1}_" -a -d ${1} ] && file=""
[ "_${1/\//}_" == "_${1}_" -a -f ${1} ] && dir=""
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo ${PWD}${file:+"/"${file}} # output full path with filename
popd &> /dev/null
}
</syntaxhighlight>
=Arrays=
==Reverse the order of elements==
An example with services in normal and reverse order for start/stop:
<syntaxhighlight lang=bash>
declare -a SERVICES_STOP=(service1 service2 service3 service4)
declare -a SERVICES_START
for(( i=$[ ${#SERVICES_STOP[*]} - 1 ] ; i>=0 ; i-- ))
do
SERVICES_START+=(${SERVICES_STOP[$i]})
done
</syntaxhighlight>
This results in:
<syntaxhighlight lang=bash>
$ echo ${SERVICES_STOP[*]} ; echo ${SERVICES_START[*]}
service1 service2 service3 service4
service4 service3 service2 service1
</syntaxhighlight>
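A sketch of an alternative without an index loop, assuming bash 4 or newer for <i>mapfile</i> and a system that has <i>tac</i>:

```bash
declare -a SERVICES_STOP=(service1 service2 service3 service4)
# Print one element per line, reverse the lines with tac,
# and read them back into an array with mapfile (bash >= 4).
mapfile -t SERVICES_START < <(printf '%s\n' "${SERVICES_STOP[@]}" | tac)
echo "${SERVICES_START[*]}"
# -> service4 service3 service2 service1
```
Note this variant breaks if array elements themselves contain newlines; the index loop above does not.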
=Loops=
==Numbers==
<syntaxhighlight lang=bash>
$ for i in {0..9} ; do echo $i ; done
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
$ for ((i=0;i<=9;i++)); do echo $i; done
</syntaxhighlight>
Other step sizes work the same way, e.g. incrementing by 3:
<syntaxhighlight lang=bash>
$ for ((i=0;i<=9;i+=3)); do echo $i; done
</syntaxhighlight>
or even with two loop variables:
<syntaxhighlight lang=bash>
$ for ((i=0,j=1;i<=9;i+=3,j++)); do echo "$i $j"; done
</syntaxhighlight>
==Exit controlled loop==
Just put your code between <i>while</i> and <i>do</i>, and use the no-op builtin <i>:</i> as the loop body.
<syntaxhighlight lang=bash>
#!/bin/bash
while
# some code
(( <your control expression> ))
do
:
done
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
#!/bin/bash
i=1
while
i=$[ $i + 1 ];
(( $i < 10 ))
do
:
done
</syntaxhighlight>
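A small runnable variant of the same pattern with visible output (the input lines are illustrative), to make the control flow clearer:

```bash
#!/bin/bash
# The "control expression" is a test on the line just read,
# so the loop exits on the first blank line or on EOF.
count=0
while
    read -r line
    [ -n "${line}" ]
do
    count=$(( count + 1 ))
done <<< $'a\nb\nc'
echo "read ${count} non-empty lines"
# -> read 3 non-empty lines
```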
=Functions=
==Log with timestamp==
<syntaxhighlight lang=bash>
function printlog () {
# Function:
# Log things to logfile
#
# Parameter:
# 1: logfile
# *: You can call printlog like printf (except the first parameter is the logfile)
#
# OR
#
# Just pipe things to printlog
#
local logfile=${1}
shift
if [ -n "${*}" ]
then
format=${1}
shift
printf "%s ${format}" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${@}" >> ${logfile}
else
while read input
do
printf "%s %s\n" "$(/bin/date '+%Y%m%d %H:%M:%S')" "${input}" >> ${logfile}
done
fi
}
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ printf "test\n\ntoast\n" | printlog /dev/stdout
20190603 12:48:13 test
20190603 12:48:13
20190603 12:48:13 toast
$ printlog /dev/stdout "test\n"
20190603 12:48:19 test
$ printlog /dev/stdout "test %s %d %s\n" "bla" 0 "bli"
20190603 12:48:25 test bla 0 bli
$
</syntaxhighlight>
=Calculations=
<syntaxhighlight lang=bash>
$ echo $[ 3 + 4 ]
7
$ echo $[ 2 ** 8 ] # 2^8
256
</syntaxhighlight>
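Note that the $[ ] form is deprecated; the POSIX arithmetic expansion $(( )) does the same and is what new scripts should use:

```bash
# POSIX arithmetic expansion, equivalent to the deprecated $[ ] form
echo $(( 3 + 4 ))   # prints 7
echo $(( 2 ** 8 ))  # 2^8, prints 256
```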
=init scripts=
==A basic skeleton==
<syntaxhighlight lang=bash>
#!/bin/bash
NAME=<myname> # The name of the daemon
USER=<runuser> # The user to run the daemon as
SELF=${0##*/}
CALLER=$(id -nu)
# Check if called as ${USER}
if [ "_${CALLER}_" != "_${USER}_" ]
then
# If not do a su if called as root
if [ "_${CALLER}_" == "_root_" ]
then
exec su -l ${USER} -c "$0 $@"
else
echo "Please start this script only as user ${USER}"
exit 1
fi
fi
if [ $# -eq 1 ]
then
command=$1
else
# Called as ${NAME}-start.sh or ${NAME}-stop.sh
command=${SELF%.sh}
command=${command##${NAME}-}
[ "_${command}_" == "_${NAME}_" ] && command=""
fi
case ${command} in
start)
# start commands
;;
stop)
# stop commands
;;
restart)
$0 stop
$0 start
;;
*)
[ ! -z "${command}" ] && echo "ERROR: Unknown option ${command}!"
echo "Usage: $0 (start|stop|restart)";
echo "Or call as ${NAME}-(start|stop|restart).sh"
exit 1
;;
esac
</syntaxhighlight>
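The name-based dispatch in the skeleton rests on two parameter substitutions; a minimal sketch of how the command is derived when the script is called through a symlink (NAME and SELF are illustrative values):

```bash
#!/bin/bash
# As if the skeleton were invoked via a symlink named mydaemon-start.sh
NAME=mydaemon
SELF="${NAME}-start.sh"
command=${SELF%.sh}           # strip the .sh suffix -> mydaemon-start
command=${command##${NAME}-}  # strip the "mydaemon-" prefix -> start
echo "${command}"
# -> start
```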
= Logging and output in your scripts =
== Add a timestamp to all output ==
<syntaxhighlight lang=bash>
#!/bin/bash
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} &
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</syntaxhighlight>
== Add a timestamp to all output and send to file==
<syntaxhighlight lang=bash>
#!/bin/bash
LOGFILE=/tmp/bla.log
# Find temp filename
FIFO=$(mktemp)
# Cleanup on exit
trap 'rm -f ${FIFO}' 0
# Delete file created by mktemp
rm "${FIFO}"
# Create a FIFO instead
mkfifo "${FIFO}"
# Read from FIFO and add date at the beginning
sed -e "s|^|$(date '+%d.%m.%Y %H:%M:%S') :: |g" < ${FIFO} > ${LOGFILE}&
# Redirect stdout & stderr to FIFO
exec > ${FIFO} 2>&1
#
# Now your program
#
echo bla
echo bli >&2
</syntaxhighlight>
=Parameter parsing=
In progress... no time...
<syntaxhighlight lang=bash>
while [ $# -gt 0 ]
do
case $1 in
-h|--help)
usage help
shift;
exit 0;
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
*)
other_params[$[${#other_params[*]} + 1]]="${param}=${value}"
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
</syntaxhighlight>
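Until the parser above is finished, a minimal working sketch using the <i>getopts</i> builtin may be enough (the option letters and the function name are illustrative):

```bash
#!/bin/bash
# Minimal short-option parser using the getopts builtin.
parse_args() {
    local OPTIND opt file="" verbose=0
    while getopts "f:v" opt
    do
        case ${opt} in
            f) file=${OPTARG} ;;   # -f takes a value
            v) verbose=1 ;;        # -v is a flag
            *) return 1 ;;         # unknown option
        esac
    done
    shift $(( OPTIND - 1 ))        # drop the parsed options
    echo "file=${file:-none} verbose=${verbose} rest=$*"
}
parse_args -v -f /tmp/x hello
# -> file=/tmp/x verbose=1 rest=hello
```
Note that getopts only handles short options; for --long=value style arguments the manual while/case loop above is still needed.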
92886f6cdacfc2594571b7c5e59d7047488fff12
Network troubleshooting
0
284
2538
2530
2021-11-26T02:57:14Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Networking|Troubleshooting]]
=Network troubleshooting=
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<syntaxhighlight lang=bash>
# ping -I <your virtual ip> <destination>
</syntaxhighlight>
On Solaris:
<syntaxhighlight lang=bash>
# ping -sni <your virtual ip> <destination>
</syntaxhighlight>
=== Traceroute ===
<syntaxhighlight lang=bash>
# traceroute -s <your virtual ip> <destination>
</syntaxhighlight>
=== SSH ===
<syntaxhighlight lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</syntaxhighlight>
=== Telnet ===
<syntaxhighlight lang=bash>
# telnet -b <your virtual ip> <destination>
</syntaxhighlight>
== Interface details ==
=== Linux ===
<syntaxhighlight lang=bash>
# ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
</syntaxhighlight>
=== Solaris ===
a5ba4cf296ee90aa3f73319cb870a687ba79fd3c
Template:Dokumentation
10
52
2539
90
2021-11-26T03:00:07Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Nobots}}{{Tausendfach verwendet}}<onlyinclude><hr class="rulerdocumentation hintergrundfarbe6" style="margin:1em 0em; height:0.7ex;" />
{{#ifeq:{{NAMESPACE}}|{{ns:0}}|<strong class="error">Achtung: Die {{Vorlage|Dokumentation}} wird im Artikelnamensraum verwendet. Wahrscheinlich fehlt <code><noinclude></code> in einer eingebundenen Vorlage oder die Kapselung ist fehlerhaft. Bitte {{Bearbeiten|text=entferne diesen Fehler}}.</strong>|
<div style="float:right; clear:left;">[[Datei:Information icon.svg|frameless|18px|link=#Dokumentation.Info|Informationen zu dieser Dokumentation|alt=]]</div>
{{Überschriftensimulation 4|1=<span class="editsection">[<span class="plainlinks">[{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=edit}} Bearbeiten]</span>]</span> Dokumentation}}
{{#ifexist: {{SUBJECTPAGENAME}}/Doku|
{{{{SUBJECTPAGENAME}}/Doku}}
<br /><hr style="border:none; height:0.7ex; clear:both;" />
{{{!}} {{Bausteindesign5}}
{{!}} Bei Fragen zu dieser [[Hilfe:Vorlagen|Vorlage]] kannst Du Dich an die [[Wikipedia:WikiProjekt Vorlagen/Werkstatt|Vorlagenwerkstatt]] wenden.
{{!}}}
{{{!}} cellspacing="8" cellpadding="0" class="plainlinks" style="background:transparent; margin: 2px 0;" id="Dokumentation.Info"
{{!}} style="position:relative; width:35px; vertical-align:top;" {{!}} [[Datei:Information icon.svg|30px|Information|alt=]]
{{!}} style="width: 100%;" {{!}}
<ul>
<li>{{#switch:{{ParmPart|1|{{{nr|<noinclude>10</noinclude>}}}}}
| 1 = {{Verwendung|ns=1}} der Vorlage auf Artikel-Diskussionsseiten.
| 2 = {{Verwendung|ns=2}} der Vorlage auf Benutzerseiten.
| 3 = {{Verwendung|ns=3}} der Vorlage auf Benutzer-Diskussionsseiten.
| 4 = {{Verwendung|ns=4}} der Vorlage auf Systemseiten.
| 6 = {{Verwendung|ns=6}} der Vorlage bei Dateien.
| 10 = {{Verwendung|ns=10}} der Vorlage auf Vorlagenseiten.
| 11 = {{Verwendung|ns=10}} der Vorlage auf Vorlagen-Diskussionsseiten.
| 14 = {{Verwendung|ns=14}} der Vorlage auf Kategorieseiten.
| #default = {{Verwendung}} der Vorlage in Artikeln.
}}</li>
<li>{{#switch:{{ParmPart|2|{{{nr|<noinclude>10</noinclude>}}}}}
| 1 = {{Verwendung|ns=1}} der Vorlage auf Artikel-Diskussionsseiten.
| 2 = {{Verwendung|ns=2}} der Vorlage auf Benutzerseiten.
| 3 = {{Verwendung|ns=3}} der Vorlage auf Benutzer-Diskussionsseiten.
| 4 = {{Verwendung|ns=4}} der Vorlage auf Systemseiten.
| 6 = {{Verwendung|ns=6}} der Vorlage bei Dateien.
| 10 = {{Verwendung|ns=10}} der Vorlage auf Vorlagenseiten.
| 11 = {{Verwendung|ns=10}} der Vorlage auf Vorlagen-Diskussionsseiten.
| 14 = {{Verwendung|ns=14}} der Vorlage auf Kategorieseiten.
}}</li>
<li> Diese Dokumentation befindet sich [[{{SUBJECTPAGENAME}}/Doku|auf einer eingebundenen Unterseite]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Doku|/[[{{TALKPAGENAME}}/Doku|Diskussion]]}})</span>.</li>
{{#ifexist: {{SUBJECTPAGENAME}}/Meta
| <li>Die Metadaten ([[Hilfe:Kategorien|Kategorien]] und [[Hilfe:Internationalisierung|Interwikis]]) {{#ifeq:{{NAMESPACE}}|{{ns:2}}
| in [[{{SUBJECTPAGENAME}}/Meta]] werden '''nicht''' eingebunden, weil sich die Vorlage im [[Hilfe:Benutzernamensraum|Benutzernamensraum]] befindet
| werden [[{{SUBJECTPAGENAME}}/Meta|von einer Unterseite eingebunden]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Meta|/[[{{TALKPAGENAME}}/Meta|Diskussion]]}})</span>
}}.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=edit&preload=Vorlage:Dokumentation/preload-meta}} Metadatenseite erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Wartung
| <li>Für diese Vorlage existiert eine [[{{SUBJECTPAGENAME}}/Wartung|Wartungsseite]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Wartung|/[[{{TALKPAGENAME}}/Wartung|Diskussion]]}})</span> zum Auffinden fehlerhafter Verwendungen.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=edit&preload=Vorlage:Dokumentation/preload-wartung}} Wartungsseite erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/XML
| <li>Für diese Vorlage existiert eine [[{{SUBJECTPAGENAME}}/XML|XML-Beschreibung]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/XML|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/XML|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/XML|/[[{{TALKPAGENAME}}/XML|Diskussion]]}})</span> für den [[Wikipedia:Helferlein/Vorlagen-Meister|Vorlagenmeister]].</li>
| <li class="metadata metadata-label">[[tools:~revolus/Template-Master/index.de.html|XML-Beschreibungsseite erstellen]]</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Test
| <li>Anwendungsbeispiele und Funktionalitätsprüfungen befinden sich auf der [[{{SUBJECTPAGENAME}}/Test|Testseite]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Test|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Test|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Test|/[[{{TALKPAGENAME}}/Test|Diskussion]]}})</span>.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Test|action=edit&preload=Vorlage:Dokumentation/preload-test}} Test-/Beispielseite erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Druck
| <li>Es existiert eine spezielle [[{{SUBJECTPAGENAME}}/Druck|Druckversion]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Druck|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Druck|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Druck|/[[{{TALKPAGENAME}}/Druck|Diskussion]]}})</span> für die [[Hilfe:Buchfunktion|Buchfunktion]].</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Druck|action=edit&preload=Vorlage:Dokumentation/preload-druck}} Druckversion erstellen].</li>
}}
{{#ifexist:{{SUBJECTPAGENAME}}/Editnotice
| <li>Es existiert eine [[{{SUBJECTPAGENAME}}/Editnotice|Editnotice]]<span class="metadata"><span /> ([{{fullurl:{{SUBJECTPAGENAME}}/Editnotice|action=edit}} Bearbeiten]/[{{fullurl:{{SUBJECTPAGENAME}}/Editnotice|action=history}} Versionen]{{#ifexist:{{TALKPAGENAME}}/Editnotice|/[[{{TALKPAGENAME}}/Editnotice|Diskussion]]}})</span>, die beim Bearbeiten angezeigt wird.</li>
| <li class="metadata metadata-label">[{{fullurl:{{SUBJECTPAGENAME}}/Editnotice|action=edit&preload=Vorlage:Dokumentation/preload-editnotice}} Editnotice erstellen].</li>
}}
<li>[[Spezial:Präfixindex/{{SUBJECTPAGENAME}}/|Liste der Unterseiten]].</li>
</ul>
{{!}}}
|<span class="plainlinks" style="font-size:150%;">
* [{{fullurl:{{SUBJECTPAGENAME}}/Doku|action=edit&preload=Vorlage:Dokumentation/preload-doku}} Dokumentation erstellen]
{{#ifexist:{{SUBJECTPAGENAME}}/Meta||
* [{{fullurl:{{SUBJECTPAGENAME}}/Meta|action=edit&preload=Vorlage:Dokumentation/preload-meta}} Metadatenseite erstellen]}}
{{#ifexist:{{SUBJECTPAGENAME}}/Wartung||
* [{{fullurl:{{SUBJECTPAGENAME}}/Wartung|action=edit&preload=Vorlage:Dokumentation/preload-wartung}} Wartungsseite erstellen]}}
{{#ifexist:{{SUBJECTPAGENAME}}/Test||
* [{{fullurl:{{SUBJECTPAGENAME}}/Test|action=edit&preload=Vorlage:Dokumentation/preload-test}} Test-/Beispielseite erstellen]}}
</span>{{#ifeq:{{NAMESPACE}}|{{ns:10}}|
[[Category:Vorlage:nicht dokumentiert|{{PAGENAME}}]]
}}
}}
<div style="clear:both;" />
{{#ifeq:{{NAMESPACE}}|{{ns:2}}||{{#ifexist: {{SUBJECTPAGENAME}}/Meta|{{{{SUBJECTPAGENAME}}/Meta}}
}}}}
}}<hr class="rulerdocumentation hintergrundfarbe6" style="margin:1em 0em; height:0.7ex;" /></onlyinclude>
4c2d6edd07a3e889051d9d684f47e9a8133d76c5
MySQL Symmetric Encryption
0
272
2540
2334
2021-11-26T03:04:12Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
<syntaxhighlight lang=mysql>
> select hex(aes_encrypt(rpad("abcqweqweqweqwe",31,"~"),"mykey")) as encrypted;
+------------------------------------------------------------------+
| encrypted |
+------------------------------------------------------------------+
| E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843 |
+------------------------------------------------------------------+
</syntaxhighlight>
<syntaxhighlight lang=mysql>
> select trim(trailing "~" from aes_decrypt(unhex("E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843"),"mykey")) as decrypted;
+-----------------+
| decrypted |
+-----------------+
| abcqweqweqweqwe |
+-----------------+
</syntaxhighlight>
b132319a21ae97169c688cddeb9be13378eb1d94
Networker Tipps und Tricks
0
204
2541
2460
2021-11-26T03:12:24Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Backup]]
==Check the backup status==
Show whether a backup ran for client <networker-client> during the last 24 hours:
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,savetime(17),name,sumsize" -t "1 day ago" -q client=<networker-client>
</syntaxhighlight>
Or the last backup for a client:
<syntaxhighlight lang=bash>
# /usr/sbin/mminfo -avot -s <networker-server> -r "client,group,savetime(17),name,sumsize" -q "group=<group>,client=<networker-client>"
</syntaxhighlight>
==Recover/Restore==
Find the SSID of the save set:
<syntaxhighlight lang=bash>
# mminfo -s <networker-server> -q "client=<networker-client>,name=<directory>" -r "ssid,name,savetime(17)"
2752466240 <directory> 03/23/15 00:16:16
...
387566382 <directory> 03/31/15 00:16:14
</syntaxhighlight>
OK, we want the backup from 03/31/15 00:16:14, i.e. SSID 387566382.
Restore it to a destination directory:
<syntaxhighlight lang=bash>
# recover -s <networker-server> -S 387566382 -d <destination-directory>
</syntaxhighlight>
Caution: these are ONLY the files that were backed up on that day!
To restore everything to the state it had at a specific point in time, do the following:
<syntaxhighlight lang=bash>
# recover -s <networker-server> -c <networker-client> -t '03/31/15 00:16:14' -d <destination-directory> -a <directory>
</syntaxhighlight>
e05445975a3895877b69ba759eaba79611109ba6
Category:Web
14
243
2542
932
2021-11-26T03:15:21Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Arundo donax
0
43
2543
131
2021-11-26T03:15:48Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Pfahlrohr
| Taxon_WissName = Arundo donax
| Taxon_Rang = Art
| Taxon_Autor = [[Carl von Linné|L.]]
| Taxon2_WissName = Arundo
| Taxon2_Rang = Gattung
| Taxon3_WissName = Arundineae
| Taxon3_Rang = Tribus
| Taxon4_WissName = Arundinoideae
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Süßgräser
| Taxon5_WissName = Poaceae
| Taxon5_Rang = Familie
| Taxon6_Name = Süßgrasartige
| Taxon6_WissName = Poales
| Taxon6_Rang = Ordnung
| Bild = Arundo donax Austrieb.jpg
| Bildbeschreibung = Giant reed (''Arundo donax'') – sprouting in mid-May in Hamburg
}}
== Description ==
The giant reed (Pfahlrohr) actually comes from more southern regions, but it is winter-hardy here to a limited extent.
[[Category:Arundo]]
9ec106b0ac63f92f11754bf51309ca314b4420b7
ESPEasy
0
371
2544
2216
2021-11-26T03:18:24Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
05ec907116df70bed0e72be9f970427d0991498a
SunCluster Delete Ressource Group
0
206
2545
2285
2021-11-26T03:22:05Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:SunCluster]]
=Completely removing a resource group=
Derivation of the data that will later be used in the one-liners.
Don't do this! Once again, I take no responsibility! Everything is wrong! Don't do it!
==Set the resource group in question==
<syntaxhighlight lang=bash>
# RG=my-rg
</syntaxhighlight>
==Show resources==
<syntaxhighlight lang=bash>
# clrs list -g ${RG}
my-nsr-res
my-oracle-res
my-lh-res
my-zone-res
my-hasp-zfs-res
</syntaxhighlight>
==Take the resource group and its resources offline==
<syntaxhighlight lang=bash>
# clrg offline ${RG}
# clrs list -g ${RG} | xargs clrs disable
</syntaxhighlight>
==Show zpools==
<syntaxhighlight lang=bash>
# clrs show -p ZPools -g ${RG}
...
=== Resources ===
Resource: my-hasp-zfs-res
--- Standard and extension properties ---
Zpools: my_pool my-redo1_pool my-redo2_pool
Class: extension
Description: The list of zpools
Per-node: False
Type: stringarray
...
</syntaxhighlight>
==Show zpool names only==
<syntaxhighlight lang=bash>
# clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}'
my_pool my-redo1_pool my-redo2_pool
</syntaxhighlight>
==Show DID devices==
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs cldev list -vn $(hostname)
DID Device Full Device Path
---------- ----------------
d53 node06:/dev/rdsk/c0t600A0B80006E103C00000B9B50B2F83Ed0
d38 node06:/dev/rdsk/c0t600A0B80006E10020000D54150B2FF26d0
d57 node06:/dev/rdsk/c0t600A0B80006E103C00000B9E50B2F9FFd0
d50 node06:/dev/rdsk/c0t600A0B80006E10020000D54450B300C8d0
d46 node06:/dev/rdsk/c0t600A0B80006E103C00000BA250B3098Ad0
d28 node06:/dev/rdsk/c0t600A0B80006E10020000D54850B310C2d0
d55 node06:/dev/rdsk/c0t600A0B80006E134400000B5350B2FB08d0
d56 node06:/dev/rdsk/c0t600A0B80006E10E40000D6F450B2FBB1d0
d40 node06:/dev/rdsk/c0t600A0B80006E134400000B5950B30D8Bd0
d45 node06:/dev/rdsk/c0t600A0B80006E10E40000D6FA50B30E62d0
</syntaxhighlight>
or only the DIDs:
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo instance
53
38
57
50
46
28
55
56
40
45
</syntaxhighlight>
==Disable device monitoring==
This is important for removing the devices completely from the cluster later!
<syntaxhighlight lang=bash>
# for disk in $(for zpool in $(clrs show -p ZPools -g ${RG} | nawk '$1=="Zpools:"{$1="";print $0;}' ) ; do zpool import ${zpool} 2>/dev/null ; zpool status ${zpool} ; zpool export ${zpool} ; done | nawk '/c[0-9]+t/{gsub(/s.*$/,"",$1);print $1}') ; do echo /dev/rdsk/${disk}; done | xargs scdidadm -lo name | xargs cldev unmonitor
</syntaxhighlight>
==Delete the resource group==
<syntaxhighlight lang=bash>
# RG=bla-rg
# clrs disable -g ${RG} +
# clrs delete -g ${RG} +
# clrg delete ${RG}
</syntaxhighlight>
==Now unmap the LUNs on the storage==
And delete them if applicable...
==Remove LUNs that no longer exist from Solaris==
<syntaxhighlight lang=bash>
# for node in $(clnode list) ; do ssh ${node} cfgadm -alo show_SCSI_LUN | nawk '$NF=="unusable"{gsub(/,[0-9]+$/,"",$1);print $1}' | sort -u | xargs -n 1 ssh ${node} cfgadm -c unconfigure -o unusable_SCSI_LUN ; ssh ${node} devfsadm -C -v -c disk ; done
</syntaxhighlight>
==Clean up the DIDs==
<syntaxhighlight lang=bash>
# for node in $(clnode list) ; do cldev refresh -n ${node} ; cldev clear -n ${node} ; done
</syntaxhighlight>
==Clean up zone configs if needed==
<syntaxhighlight lang=bash>
# ZONE=my-zone
# for node in $(clnode list) ; do ssh ${node} zonecfg -z ${ZONE} delete -F ; done
</syntaxhighlight>
a8a99f61523a2500f234416bcf7d4051e79745a3
Solaris mdb magic
0
23
2546
2525
2021-11-26T03:24:18Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Modular Debugger]]
=Various small mdb tricks=
==Memory usage==
<pre>
# echo ::memstat|mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 2855874 11155 69%
Anon 50119 195 1%
Exec and libs 4754 18 0%
Page cache 22972 89 1%
Free (cachelist) 11948 46 0%
Free (freelist) 1221894 4773 29%
Total 4167561 16279
Physical 4078747 15932
</pre>
==Query a kernel parameter==
Syntax: echo '<Parameter>/D' | mdb -k
<pre>
# echo 'ncsize/D' | mdb -k
ncsize:
ncsize: 70485
</pre>
==Set a kernel parameter==
Syntax: echo '<Parameter>/W<Value>' | mdb -wk
<pre>
# echo 'do_tcp_fusion/W0' | mdb -wk
do_tcp_fusion: 0 = 0x0
</pre>
==Inquiry strings in Solaris 11==
<syntaxhighlight lang=bash>
# echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb -k
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
inq_vid = [ "NECVMWar" ]
inq_pid = [ "VMware SATA CD00" ]
inq_vid = [ "VMware " ]
inq_pid = [ "Virtual disk " ]
</syntaxhighlight>
fc86d9ace9525e392d9868942e3ba569a74b010b
SuSE NIS
0
380
2547
2338
2021-11-26T03:25:31Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:SuSE]]
=!!!! First of all: You do NOT want NIS, for security reasons !!!!=
NIS is not NIS+, and it uses no encryption. So do not use it, or if you really have to, use it wisely!
==NIS Client==
===Add packages===
<syntaxhighlight>
# zypper in yast2-nis-client ypbind
</syntaxhighlight>
===/etc/sysconfig/network/config===
<syntaxhighlight>
NETCONFIG_MODULES_ORDER="dns-resolver dns-bind dns-dnsmasq nis ntp-runtime"
NETCONFIG_NIS_STATIC_SERVERS="nis-server.domain.tld"
NETCONFIG_NIS_SETDOMAINNAME="yes"
NETCONFIG_NIS_POLICY="auto"
</syntaxhighlight>
<syntaxhighlight>
# netconfig update -f
</syntaxhighlight>
Check:
<syntaxhighlight>
# cat /etc/yp.conf
...
ypserver nis-server.domain.tld
</syntaxhighlight>
===Set NIS Domain===
<syntaxhighlight>
# nisdomainname nis.domain.tld
</syntaxhighlight>
Check:
<syntaxhighlight>
# nisdomainname
nis.domain.tld
#
</syntaxhighlight>
===Add to /etc/passwd===
<syntaxhighlight>
+::::::
</syntaxhighlight>
===Add to /etc/shadow===
<syntaxhighlight>
+::0:0:0::::
</syntaxhighlight>
===/etc/nsswitch.conf===
<syntaxhighlight>
...
passwd: compat
group: compat
...
</syntaxhighlight>
alternative for older installations:
<syntaxhighlight>
...
passwd: files nis
group: files nis
...
</syntaxhighlight>
===yast===
<syntaxhighlight>
Network Services -> NIS Client
[Alt]+[u] (Use NIS)
[F10] Finish
[F9] Quit
</syntaxhighlight>
Check:
<syntaxhighlight>
# ypcat passwd.byname
</syntaxhighlight>
1b96e49c9b2879ca55e830e041a5e1fa1e8de4dc
Linux udev
0
88
2548
2386
2021-11-26T03:27:50Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|udev]]
==Persistent network interface names==
If you have no <i>/etc/udev/rules.d/70-persistent-net.rules</i> just create one:
<syntaxhighlight lang=bash>
# lshw -C network | awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}' >> /etc/udev/rules.d/70-persistent-net.rules
</syntaxhighlight>
or add a specific interface to <i>/etc/udev/rules.d/70-persistent-net.rules</i>:
<syntaxhighlight lang=bash>
# MATCHADDR="00:50:56:a1:20:22" INTERFACE=eth2 /lib/udev/write_net_rules
</syntaxhighlight>
Change order with:
<syntaxhighlight lang=bash>
# vi /etc/udev/rules.d/70-persistent-net.rules
</syntaxhighlight>
Then let udev reread the file:
<syntaxhighlight lang=bash>
# udevadm trigger --action=add --subsystem-match=net --verbose
</syntaxhighlight>
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<syntaxhighlight lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</syntaxhighlight>
===Test your rule===
<syntaxhighlight lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</syntaxhighlight>
OK user mysql unknown... maybe I should install MySQL ;-).
After that:
<syntaxhighlight lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</syntaxhighlight>
===Trigger your rule===
<syntaxhighlight lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
4c92dca6bf80802bc019c1e38686ce5f25054d53
Category:Insekten
14
180
2549
546
2021-11-26T03:30:14Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Tiere]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=3}}
4102fbf0160f568b22165bd11505c72a1b5d1d77
Solaris SMF
0
100
2550
2300
2021-11-26T03:32:27Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Kategorie:Solaris|SMF]]
__FORCETOC__
== Running foreground processes ==
<syntaxhighlight lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</syntaxhighlight>
==Adding dependency on another service==
For example mount NFS after ZFS:
<syntaxhighlight lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</syntaxhighlight>
==Setting multiple parameters to environment variables==
The goal:
* Raise -Xmx from 512m to 2G
The problem: svccfg setenv rejects the long value with a syntax error:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</syntaxhighlight>
So you have to set the complete environment this way:
* Get the complete environment:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</syntaxhighlight>
* Set the complete (modified) environment:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</syntaxhighlight>
* Check it with:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</syntaxhighlight>
== Ignore child process coredumps ==
<syntaxhighlight lang=xml>
<property_group name='startd' type='framework'>
<!-- sub-process core dumps shouldn't restart
session -->
<propval name='ignore_error' type='astring'
value='core,signal' />
</property_group>
</syntaxhighlight>
<syntaxhighlight lang=bash>
# svccfg -s clamav
svc:/network/clamav> addpg startd framework
svc:/network/clamav> addpropvalue startd/ignore_error astring: core,signal
svc:/network/clamav> end
</syntaxhighlight>
7b1006a7a81f4954a7aaf119f0eef0bfaab894a4
2565
2550
2021-11-26T03:50:01Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|SMF]]
__FORCETOC__
== Running foreground processes ==
<syntaxhighlight lang=xml>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/foreground-daemon' type='service' version='0'>
<single_instance/>
<dependency name='filesystem_minimal' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='loopback' grouping='require_any' restart_on='error' type='service'>
<service_fmri value='svc:/network/loopback'/>
</dependency>
<dependency name='network' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network'/>
</dependency>
<instance name='default' enabled='true'>
<exec_method name='refresh' type='method' exec=':true' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<exec_method name='start' type='method' exec='/opt/foreground/bin/foreground-daemon %m' timeout_seconds='0'>
<method_context project='foreground-project' >
<method_credential user='foreground-user' group='noaccess' />
</method_context>
</exec_method>
<property_group type="framework" name="startd">
<propval type="astring" name="duration" value="child"/>
</property_group>
<template>
<common_name>
<loctext xml:lang='C'>Foreground Daemon</loctext>
</common_name>
<documentation>
<manpage title='foreground-daemon' section='1M' manpath='/opt/foreground/man'/>
</documentation>
</template>
</instance>
<stability value='Unstable'/>
</service>
</service_bundle>
</syntaxhighlight>
==Adding dependency on another service==
For example mount NFS after ZFS:
<syntaxhighlight lang=bash>
svccfg -s svc:/network/nfs/client addpg filesystem-local dependency
svccfg -s svc:/network/nfs/client setprop filesystem-local/grouping = astring: require_all
svccfg -s svc:/network/nfs/client setprop filesystem-local/entities = fmri: svc:/system/filesystem/local:default
svccfg -s svc:/network/nfs/client setprop filesystem-local/restart_on = astring: none
svccfg -s svc:/network/nfs/client setprop filesystem-local/type = astring: service
</syntaxhighlight>
==Setting multiple parameters to environment variables==
The goal:
* Raise -Xmx from 512m to 2G
The problem: svccfg setenv rejects the long value with a syntax error:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat setenv -m start CATALINA_OPTS '-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de'
svccfg: Syntax error.
</syntaxhighlight>
So you have to set the complete environment this way:
* Get the complete environment:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
method_context/environment astring "PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de"
</syntaxhighlight>
* Set the complete (modified) environment:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat setprop method_context/environment = astring: '("PATH=/usr/jdk/latest/bin:/usr/sbin:/usr/bin" "LC_CTYPE=de_DE.ISO8859-15@euro" "JAVA_OPTS=-Dhttp.proxyHost=proxy.server.de -Dhttp.proxyPort=8080 -Djava.awt.headless=true" "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Djava.rmi.server.hostname=tomcat.server.de")'
# svcadm refresh svc:/cms/web:tomcat
</syntaxhighlight>
* Check it with:
<syntaxhighlight lang=bash>
# svccfg -s svc:/cms/web:tomcat listprop method_context/environment
</syntaxhighlight>
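The read-modify-write dance above can be scripted. This is a sketch of just the substitution step, applied with sed to a canned, shortened listprop line; on a real system the line would come from <i>svccfg ... listprop</i> and the result would be fed to <i>svccfg ... setprop</i>:

```shell
# Canned (shortened) listprop output; on a real system capture it with:
#   svccfg -s svc:/cms/web:tomcat listprop method_context/environment
listprop='method_context/environment astring "JAVA_HOME=/usr/jdk/latest" "CATALINA_OPTS=-XX:MaxPermSize=256m -Xmx512m"'
# Drop the property name and type, bump -Xmx, and wrap the value list in
# parentheses, which is the form svccfg setprop expects:
newenv="($(printf '%s\n' "$listprop" \
  | sed -e 's|^method_context/environment astring ||' -e 's/-Xmx512m/-Xmx2G/'))"
echo "$newenv"
```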
== Ignore child process coredumps ==
<syntaxhighlight lang=xml>
<property_group name='startd' type='framework'>
<!-- sub-process core dumps shouldn't restart
session -->
<propval name='ignore_error' type='astring'
value='core,signal' />
</property_group>
</syntaxhighlight>
<syntaxhighlight lang=bash>
# svccfg -s clamav
svc:/network/clamav> addpg startd framework
svc:/network/clamav> addpropvalue startd/ignore_error astring: core,signal
svc:/network/clamav> end
</syntaxhighlight>
46ee96ece71a2b3c7278f5b5e97003685b9a7a46
Solaris 11 Zones
0
257
2551
2466
2021-11-26T03:33:17Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris11|Zones]]
==zoneclone.sh==
<syntaxhighlight lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4
if [ $# -lt 3 ] ; then
echo "Not enough arguments!"
echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
exit 1
fi
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
echo "Destination zone exists!"
exit 1
}
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
echo "Source zone does not exist!"
exit 1
}
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
exit 1
fi
if [ -n "${DST_DATASET}" ] ; then
if [ -d ${DST_DIR} ] ; then
rmdir ${DST_DIR} || {
echo "${DST_DIR} must be empty!"
exit 1
}
fi
# Is parent dataset there?
zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
echo "Destination dataset does not exist!"
exit 1
}
zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi
[ -d ${DST_DIR} ] || {
echo "Destination dir must exist!"
exit 1
}
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
BEGIN {
FS="=";
OFS="=";
}
/set zonepath/{$2=zonepath}
{ print; }
' \
| zonecfg -z ${DST_ZONE} -f -
zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</syntaxhighlight>
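The zonepath rewrite inside the script can be seen in isolation; here plain awk stands in for Solaris nawk, the zonecfg export lines are canned, and the paths are made up:

```shell
# Two canned lines of `zonecfg export` output; only the zonepath changes.
printf 'set zonepath=/zones/src\nset autoboot=false\n' \
  | awk -v zonepath=/zones/dst '
    BEGIN { FS="="; OFS="="; }
    /set zonepath/ { $2=zonepath }
    { print; }
  '
```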
==Way that works with Solaris Cluster and immutable zones==
The problem: after a Solaris update, some post-update steps (indexing man pages etc.) cannot be performed inside the immutable zone, so the attach fails and one <i>boot -w</i> is necessary before the zone comes up in the cluster again.
<syntaxhighlight lang=bash>
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr 1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</syntaxhighlight>
===Move all RGs from node first===
<syntaxhighlight lang=bash>
# clrg evacuate -n $(hostname) +
</syntaxhighlight>
===Update Solaris===
<syntaxhighlight lang=bash>
# pkg update --be-name $(pkg info -r system/kernel | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}') --accept -v
# init 6
</syntaxhighlight>
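The nawk pipeline in the pkg update command just derives a boot environment name from the kernel package version. It can be tried on a canned <i>pkg info -r system/kernel</i> excerpt; the Branch value below is an illustrative Solaris 11.3 SRU 9 branch string, and plain awk behaves the same as Solaris nawk here:

```shell
# Canned excerpt of `pkg info -r system/kernel` (values illustrative):
cat > /tmp/pkginfo-sample.txt << 'EOF'
 Build Release: 5.11
        Branch: 0.175.3.9.0.2.0
EOF
# Same program as in the pkg update command above:
awk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}' /tmp/pkginfo-sample.txt
```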
===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online
<syntaxhighlight lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</syntaxhighlight>
===Attach, boot -w, detach without cluster===
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zlogin zone01 svcs -xv # <- wait for all services to be ready
...
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</syntaxhighlight>
===Enable zone in cluster===
<syntaxhighlight lang=bash>
# clrs enable zone01-zone-rs
</syntaxhighlight>
==Some other things==
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</syntaxhighlight>
<syntaxhighlight lang=bash>
# /usr/lib/brand/solaris/attach:
Brand specific options:
brand-specific usage:
Usage:
attach [-uv] [-a archive | -d directory | -z zbe]
[-c profile.xml | dir] [-x attach-last-booted-zbe|
force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
-u Update the software in the attached zone boot environment to
match the software in the global zone boot environment.
-v Verbose.
-c Update the zone configuration with the sysconfig profile
specified in the given file or directory.
-a Extract the specified archive into the zone then attach the
active boot environment found in the archive. The archive
may be a zfs, cpio, or tar archive. It may be compressed with
gzip or bzip2.
-d Copy the specified directory into a new zone boot environment
then attach the zone boot environment.
-z Attach the specified zone boot environment.
-x attach-last-booted-zbe : Attach the last booted zone boot
environment.
force-zbe-clone : Clone zone boot environment
on attach.
deny-zbe-clone : Do not clone zone boot environment
on attach.
destroy-orphan-zbes : Destroy all orphan zone boot
environments. (not associated with
any global BE)
</syntaxhighlight>
acd89433d4c31160aa98d8724fdb6f1e66717c6d
SSH FingerprintLogging
0
358
2552
2322
2021-11-26T03:36:07Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:SSH|Fingerprint]]
[[Category:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It simply makes it possible to set the [[Bash]] HISTFILE per logged-in SSH key/user.
==The AuthorizedKeysCommand==
* /opt/sbin/fingerprintlog:
<syntaxhighlight lang=bash>
#!/bin/bash
# /opt/sbin/fingerprintlog <logfile> %u %k %t %f
# Arguments to AuthorizedKeysCommand may be provided using the following tokens, which will be expanded at runtime:
# %% is replaced by a literal '%',
# %u is replaced by the username being authenticated,
# %h is replaced by the home directory of the user being authenticated,
# %t is replaced with the key type offered for authentication,
# %f is replaced with the fingerprint of the key, and
# %k is replaced with the key being offered for authentication.
# If no arguments are specified then the username of the target user will be supplied.
[ "_${LOGNAME}_" != "_daemon_" ] && exit 1
LOGFILE=$1
USER=$2
KEY=$3
KEYTYPE=$4
FINGERPRINT=$5
printf "%s ssh-login T=%s U=%s PPID=%s FP=%s K=%s\n" "$(/bin/date -Iseconds)" "${KEYTYPE}" "${USER}" "${PPID}" "${FINGERPRINT}" "${KEY}" >> ${LOGFILE}
</syntaxhighlight>
<syntaxhighlight lang=bash>
# chmod 0750 /opt/sbin/fingerprintlog
# chown root:daemon /opt/sbin/fingerprintlog
</syntaxhighlight>
==Create the logfile==
* /var/log/fingerprint.log
<syntaxhighlight lang=bash>
# touch /var/log/fingerprint.log
# chown daemon:ssh-user /var/log/fingerprint.log
# chmod 0640 /var/log/fingerprint.log
</syntaxhighlight>
==Setup logrotation==
* /etc/logrotate.d/fingerprintlog
<syntaxhighlight lang=bash>
/var/log/fingerprint.log
{
su daemon syslog
create 0640 daemon ssh-user
rotate 8
weekly
missingok
notifempty
}
</syntaxhighlight>
==Add fingerprint logging to sshd==
* /etc/ssh/sshd_config
<syntaxhighlight lang=bash>
...
DenyUsers daemon
AuthorizedKeysCommand /opt/sbin/fingerprintlog /var/log/fingerprint.log %u %k %t %f
AuthorizedKeysCommandUser daemon
...
</syntaxhighlight>
Restart sshd
<syntaxhighlight lang=bash>
# systemctl restart ssh.service
</syntaxhighlight>
==Add magic to your .bashrc==
<syntaxhighlight lang=bash>
# apt install gawk
</syntaxhighlight>
* ~/.bashrc
<syntaxhighlight lang=bash>
...
# Match parent PID or grand parent PID against fingerprint.log
[ -f /var/log/fingerprint.log ] && FINGERPRINT=$(/usr/bin/gawk -v ppid="(${PPID}|$(awk '{print $4;}' /proc/${PPID}/stat))" -v user=${LOGNAME} '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6;exit;}' /var/log/fingerprint.log)
# Set the history file
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
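For reference, this is how the gawk expression in .bashrc picks the fingerprint out of a log line. The line below is a made-up example in the format produced by fingerprintlog, and plain awk stands in for gawk:

```shell
# Made-up log line (fields: date, tag, key type, user, PPID, fingerprint, key):
line='2021-11-26T03:00:00+0000 ssh-login T=ssh-ed25519 U=alice PPID=1234 FP=SHA256:abc/def K=AAAAC3Nz'
# Same match/cleanup as in .bashrc; slashes in the fingerprint become
# underscores so the value is safe to use in a filename:
printf '%s\n' "$line" \
  | awk -v ppid='(1234|4321)' '$5 ~ "^PPID="ppid"$" {gsub(/^FP=/,"",$6); gsub(/\//,"_",$6); print $6; exit;}'
```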
6101806faaba21b39a31e511c7cbe7decdcdf0cc
Category:LDOM
14
202
2553
670
2021-11-26T03:37:02Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris]]
b2957bda5fdd0cfbd2a3c12d4f811f750d2f9508
Category:Isoptera
14
304
2554
1698
2021-11-26T03:39:14Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: Termiten]]
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| superordo = Dictyoptera
| ordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
* [https://commons.wikimedia.org/wiki/Category:Isoptera Isoptera at Wikimedia Commons]
bec911a5ea13edb0c8e791652d885ee8838da5eb
CreepyLinks
0
354
2555
1854
2021-11-26T03:39:23Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Security]]
=Google=
* [https://www.google.com/maps/timeline?pb Google Maps Timeline]
* [https://myactivity.google.com/myactivity Activity]
==Settings==
* [https://google.com/settings/ads/ Control your ads]
* [https://myaccount.google.com/security Account security settings]
==Get what google has about you==
* [https://takeout.google.com/settings/takeout?pli=1 Download huge amount of data about you]
=YouTube=
* [https://www.youtube.com/feed/history Youtube History]
6357dee9effa5390475c443733d45d9565c9d6e8
NetApp SP
0
211
2556
2389
2021-11-26T03:41:28Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Hardware|NetApp]]
[[Category:NetApp|SP]]
== Setup SP IP address==
<syntaxhighlight lang=bash>
filer01> system node service-processor network modify -address-type IPv4 -ip-address 172.32.40.54 -netmask 255.255.255.0 -gateway 172.32.40.1 -enable true
filer01> system node service-processor reboot-sp
Note: If your console connection is through the SP, it will be disconnected.
Do you want to reboot the SP ? {y|n}: y
</syntaxhighlight>
b9ef304c5d69cefde4e03334e028009a04950e21
Sun Cluster - Repair Infrastructure
0
32
2557
656
2021-11-26T03:42:17Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:SunCluster|Repair Infrastructure]]
If the infrastructure file of a cluster node is damaged, or a quorum device that no longer exists has to be removed from the configuration, perform the following steps:
1. Bring the node into non-cluster mode
<pre>
# reboot -- -sx
</pre>
From the OBP on SPARC systems:
<pre>
ok> boot -sx
</pre>
Or on x86/Opteron:
<pre>
b -sx
</pre>
2. Edit the infrastructure file:
<pre>
# mount /var
# export TERM=vt100
# vi /etc/cluster/ccr/infrastructure
</pre>
All quorum device entries have to be removed here, and (with more than two nodes) the votes of the other nodes have to be set to 0.
e.g.:
cluster.nodes.2.properties.quorum_vote 0
And enable install mode:
cluster.properties.installmode enabled
3. Regenerate the checksum in the file:
<pre>
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
</pre>
or, starting with Solaris Cluster 3.2:
<pre>
# /usr/cluster/lib/sc/ccradm recover -o /etc/cluster/ccr/global/infrastructure
</pre>
4. Check that everything is OK
<pre>
# /usr/cluster/lib/sc/chkinfr
</pre>
5. Reboot into cluster mode
<pre>
# reboot
</pre>
Alternative description by [http://www.edv-birk.de/ Lothar Birk]:
==Emergency situation when the cluster node gets no cluster quorum at boot==
===Boot into 'non-cluster' mode===
boot -xs
===Modify the infrastructure file in the CCR===
<pre>
cd /etc/cluster/ccr
or
cd /etc/cluster/ccr/global
cp infrastructure 100610_infrastructure
vi infrastructure
- set the quorum vote of the other node to 0
...node.X...quorum_vote 0
- delete all lines at the end of the file containing:
...quorum_devices...
/usr/cluster/lib/sc/ccradm -i infrastructure -o
or
/usr/cluster/lib/sc/ccradm recover -o infrastructure
</pre>
===Boot back into cluster mode and create a quorum device===
<pre>
init 6
clq add d1
</pre>
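The manual infrastructure edits described above (zeroing the other node's quorum vote, deleting the quorum device lines, enabling install mode) can also be sketched as a sed filter; the input below is a simplified, made-up sample of infrastructure entries, not a real CCR file:

```shell
# Simplified sample of /etc/cluster/ccr/infrastructure entries:
cat > /tmp/infrastructure-sample << 'EOF'
cluster.properties.installmode disabled
cluster.nodes.2.properties.quorum_vote 1
cluster.quorum_devices.1.name d1
EOF
# Zero the other node's vote, drop quorum device lines, enable install mode.
# (On a real node the checksum still has to be regenerated with ccradm.)
sed -e 's/^\(cluster\.nodes\.2\.properties\.quorum_vote\) .*/\1 0/' \
    -e '/^cluster\.quorum_devices\./d' \
    -e 's/^\(cluster\.properties\.installmode\) .*/\1 enabled/' \
    /tmp/infrastructure-sample
```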
1d24a89be8f31780188206484f773c23fbdae64c
SunCluster cheatsheet
0
35
2558
66
2021-11-26T03:43:58Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
* [[Media:SC_quickreference.pdf|SunCluster 3.x Quick Reference]]
* [[Media:820-0318.pdf|SunCluster 3.2 Quick Reference (Deutsch)]]
[[Category:SunCluster]]
eae411323847ed235e73a9feeac775088557aea5
Solaris SVM boot cdrom with metadevices
0
25
2559
45
2021-11-26T03:45:09Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
First, boot from the media:
<syntaxhighlight lang=bash>
# boot net -s
</syntaxhighlight>
Now mount one of the subdisks read-only, so you cannot accidentally damage the subdisk:
<syntaxhighlight lang=bash>
# mount -o ro /dev/dsk/c0t0d0s0 /a
</syntaxhighlight>
Then set up the current booted environment so it can use Solaris Volume Manager:
<syntaxhighlight lang=bash>
# cp /a/kernel/drv/md.conf /kernel/drv/md.conf
# umount /a
</syntaxhighlight>
Now update the Solaris Volume Manager driver to load the configuration:
<syntaxhighlight lang=bash>
# update_drv -f md
</syntaxhighlight>
Ignore any error messages from update_drv and recreate the metadevices from the configuration:
<syntaxhighlight lang=bash>
# metainit -r
</syntaxhighlight>
[[Category:Solaris_SVM]]
ac6d68b4603e0901b37594732ca0483e99c237ca
Ecryptfs
0
349
2560
2411
2021-11-26T03:47:34Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
==Tips & Tricks==
===ecryptfs-mount-private -> mount: No such file or directory===
====Problem====
<syntaxhighlight lang=bash>
user@host:~$ ecryptfs-mount-private
Enter your login passphrase:
Inserted auth tok with sig [affecaffeeaffe00] into the user session keyring
mount: No such file or directory
user@host:~$
</syntaxhighlight>
The keys are correctly unlocked:
<syntaxhighlight lang=bash>
user@host:~$ keyctl list @u
2 keys in keyring:
1013878144: --alswrv 2223 2223 user: affecaffeeaffe01
270316877: --alswrv 2223 2223 user: affecaffeeaffe02
</syntaxhighlight>
But no luck:
<syntaxhighlight lang=bash>
$ ls -al
total 20
drwx------ 3 ansible admin 8 Dez 7 09:12 .
drwxr-xr-x 6 root root 6 Dez 7 09:10 ..
lrwxrwxrwx 1 root root 32 Dez 7 09:11 .Private -> /home/.ecryptfs/ansible/.Private
lrwxrwxrwx 1 root root 33 Dez 7 09:11 .ecryptfs -> /home/.ecryptfs/ansible/.ecryptfs
lrwxrwxrwx 1 root root 52 Dez 7 09:12 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt
lrwxrwxrwx 1 root root 56 Dez 7 09:11 ecryptfs-mount-private.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop
</syntaxhighlight>
====Workaround====
<syntaxhighlight lang=bash>
user@host:~$ keyctl link @u @s
user@host:~$ ecryptfs-mount-private
user@host:~$
</syntaxhighlight>
f0c03e8d91bd3b3d475dd46220b8c1f16e5568df
Template:Taxobox/IstRangKursiv
10
48
2561
86
2021-11-26T03:47:48Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>{{#switch: {{lc:{{{1}}}}}
|gattung|genus|untergattung|subgenus|sektion|sectio|untersektion|subsectio|serie|series|unterserie|subseries|stirps|stirps|artenkreis|superspecies|superspezies|art|species|unterart|subspecies|varietät|varietas|untervarietät|subvarietas|form|forma|unterform|subforma = 1
| #default = 0
}}</includeonly><noinclude>This template is used inside [[Vorlage:Taxobox]]; for technical documentation see [[Vorlage:Taxobox/Doku/Tech]].
[[Category:Vorlage:Untervorlage|Taxobox/IstRangKursiv]]
</noinclude>
27b015ee02b0bd19227e66297fe2d70348c24ad9
Template:Taxobox/Zitat
10
51
2562
89
2021-11-26T03:49:20Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<includeonly>! [[Nomenklatur (Biologie)|Wissenschaftlicher Name]] {{#if:{{{KeinRang|}}} | | {{Taxobox/Rang|Rang={{{Rang|}}}|Genitiv=ja}}}}
{{!-}}
{{!}} class="taxo-name" {{!}} {{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}{{{WissName|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Rang|}}}}}|''}}
{{!-}}
{{!}} class="Person" {{!}} {{{Autor|}}}
{{!-}}</includeonly><noinclude>This template is used inside [[Vorlage:Taxobox]]; for technical documentation see [[Vorlage:Taxobox/Doku/Tech]].
[[Category:Vorlage:Untervorlage|Taxobox/Zitat]]
</noinclude>
11ae988927e2344a1351d9aa0017808c1e9ebc1e
MySQL slave with LVM
0
239
2563
2416
2021-11-26T03:49:37Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<syntaxhighlight lang=bash>
master# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
master# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
</syntaxhighlight>
Enough space for a snapshot?
<syntaxhighlight lang=bash>
master# lvs /dev/mapper/vg--mysql-mysql--data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
mysql-data vg-mysql -wi-ao--- 140,00g
master# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</syntaxhighlight>
===Create a consistent snapshot===
<syntaxhighlight lang=bash>
master# mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > ${DATADIR}/master_status.$(date "+%Y%m%d_%H%M%S")
master# lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master# mysql -e "UNLOCK TABLES;"
master# mount /dev/vg-mysql/mysql-data-snap /mnt
master# cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master# mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</syntaxhighlight>
Set the innodb_data_file_path to the same value on the slave.
==Copy the data to the slave==
<syntaxhighlight lang=bash>
slave# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd ${DATADIR} ; tar xlvSpzf - )
</syntaxhighlight>
==Create replication user on master==
<syntaxhighlight lang=bash>
master# mysql -e ""
</syntaxhighlight>
==Setup slave==
<syntaxhighlight lang=bash>
slave# mysql -e ""
</syntaxhighlight>
911ff7b1b34d3a1b1cb95dd7fe3b32c257810633
Nextcloud
0
368
2564
2408
2021-11-26T03:49:52Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<syntaxhighlight lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</syntaxhighlight>
==Send calendar events==
Set the EventRemindersMode to occ:
<syntaxhighlight lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</syntaxhighlight>
and add a cronjob for the user running the webserver:
<syntaxhighlight lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</syntaxhighlight>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<syntaxhighlight lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</syntaxhighlight>
you have to put this into your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise the upgrade will run out of memory; in my case the whole server went down because it ran out of memory.
<syntaxhighlight lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</syntaxhighlight>
and since version 19:
<syntaxhighlight lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</syntaxhighlight>
Answer the questions...
If you have your own theme, proceed with these steps:
<syntaxhighlight lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</syntaxhighlight>
And the apps:
<syntaxhighlight lang=bash>
# occ app:update --all
</syntaxhighlight>
=Some tweaks for the theme to disable several things=
<syntaxhighlight lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</syntaxhighlight>
= Memcached =
You can import one of the following versions of the config file with:
<syntaxhighlight lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</syntaxhighlight>
== ip:port ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
1121
]
]
}
}
</syntaxhighlight>
== socket ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</syntaxhighlight>
173783f19cd55fef7cebb578f9d6787dc3f620fe
Inetd services
0
251
2566
2503
2021-11-26T03:50:05Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris]]
==Setting up rsyncd as inetd service==
1. Put it into the legacy file /etc/inetd.conf
<syntaxhighlight lang=bash>
# printf "rsync\tstream\ttcp\tnowait\troot\t/usr/bin/rsync\t/usr/bin/rsync --config=/etc/rsyncd.conf --daemon\n" >> /etc/inetd.conf
</syntaxhighlight>
2. Use inetconv to generate your XML file
<syntaxhighlight lang=bash>
# inetconv -o /tmp
100235/1 -> /tmp/100235_1-rpc_ticotsord.xml
Importing 100235_1-rpc_ticotsord.xml ...Done
rsync -> /tmp/rsync-tcp.xml
Importing rsync-tcp.xml ...Done
</syntaxhighlight>
3. Optionally modify the generated XML file /tmp/rsync-tcp.xml
4. Import the XML file
<syntaxhighlight lang=bash>
# svccfg import /tmp/rsync-tcp.xml
</syntaxhighlight>
5. Enable it:
<syntaxhighlight lang=bash>
# inetadm -e svc:/network/rsync/tcp:default
</syntaxhighlight>
6. Check it:
<syntaxhighlight lang=bash>
# netstat -anf inet | nawk -v port="$(nawk '$1=="rsync"{gsub(/\/.*$/,"",$2);print $2;}' /etc/services)" '$1 ~ port"$" && $NF=="LISTEN"'
*.873 *.* 0 0 49152 0 LISTEN
</syntaxhighlight>
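The inner nawk of the check above only resolves the rsync port from /etc/services; its core can be exercised on a sample line (plain awk is used here in place of Solaris nawk):

```shell
# Strip the "/tcp" suffix from the port field of a services-style line
port=$(printf 'rsync\t873/tcp\n' |
  awk '$1=="rsync"{gsub(/\/.*$/,"",$2); print $2}')
echo "$port"
```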
470ec823527f70ab5c4f72893c2fabf7243f5480
Galera Cluster
0
383
2567
2365
2021-11-26T03:50:16Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Give the CA certificate a very long lifetime, as you don't want to go through routine certificate renewals at this point.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
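To make sure a node certificate really chains to the CA, it can be checked with openssl verify. This self-contained sketch recreates a throwaway CA and one node certificate in a temp directory (hypothetical names, smaller keys than above):

```shell
d=$(mktemp -d); cd "$d"
# Throwaway CA (mirrors the CA step above, shorter lifetime and key)
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 -sha256 \
  -keyout ca-key.pem -out ca-cert.pem -batch -subj '/CN=Galera Cluster' 2>/dev/null
# One node certificate signed by that CA (mirrors the loop with node=1)
openssl req -newkey rsa:2048 -nodes -keyout node-key.pem \
  -out node-req.pem -batch -subj '/CN=maria-1.server.de' 2>/dev/null
openssl x509 -req -days 3650 -set_serial 01 -in node-req.pem \
  -out node-cert.pem -CA ca-cert.pem -CAkey ca-key.pem 2>/dev/null
result=$(openssl verify -CAfile ca-cert.pem node-cert.pem)
echo "$result"
```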
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma-separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# Snapshot state transfer (SST): copy entire database, when new node joins
wsrep_sst_method = mariabackup
# set the the_very_secret_mariabackup_password to your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
== Get knowledge about your Cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
eb514c806e40121c4f060a58c2f2b29b7e2ff1af
Autofs
0
256
2568
2383
2021-11-26T03:50:31Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux|autofs]]
[[Category:Solaris|autofs]]
==Automount home directories==
===/etc/auto.master===
<syntaxhighlight lang=bash>
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
</syntaxhighlight>
===/etc/auto.master.d/home.autofs===
<syntaxhighlight lang=bash>
/home /etc/auto.master.d/home.map
</syntaxhighlight>
===/etc/auto.master.d/home.map===
Mount homes from different locations.
<syntaxhighlight lang=bash>
* :/data/home/& nfs.server.de:/home/&
</syntaxhighlight>
or from a server that supports NFSv4.1:
<syntaxhighlight lang=bash>
* -proto=tcp,vers=4.1 nfs.server.de:/home/&
</syntaxhighlight>
The asterisk means that any directory under /home/ is matched by this rule.
The ampersand is replaced by the part that the asterisk matched.
So if you enter /home/a, the automounter first looks locally for /data/home/a, which is bind-mounted when found.
<syntaxhighlight lang=bash>
# cd /home/a
# mount -v | grep /home/a
/data/home/a on /home/a type none (rw,bind)
</syntaxhighlight>
For another /home/b which is on the nfs server it looks like this:
<syntaxhighlight lang=bash>
# cd /home/b
# mount -v | grep /home/b
nfs.server.de:/home/b on /home/b type nfs (rw,addr=172.16.17.24)
</syntaxhighlight>
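The key substitution described above can be sketched outside the automounter; the map entry below mirrors the first example:

```shell
# '&' in a map entry stands for the key that '*' matched
key=a
entry=':/data/home/& nfs.server.de:/home/&'
resolved=$(printf '%s\n' "$entry" | sed "s/&/$key/g")
echo "$resolved"
```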
===cifs===
<i>/etc/auto.master.d/mycifsshare.autofs</i>:
<syntaxhighlight lang=bash>
/data/cifs /etc/auto.master.d/mycifsshare.map
</syntaxhighlight>
<i>/etc/auto.master.d/mycifsshare.map</i>:
<syntaxhighlight lang=bash>
mycifsshare -fstype=cifs,rw,credentials=/etc/samba/mycifsshare_credentials,uid=<myuser>,forceuid ://192.168.1.2/mycifsshare
</syntaxhighlight>
0befa7febb2f9860a6db187e012bcc3839f02fca
Category:Tiere
14
40
2569
548
2021-11-26T03:50:36Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Projekte]]
{{#categorytree:{{PAGENAMEE}}|mode=pages|hideroot=on|depth=4}}
e5d9bae3ab8750216d8fb49e12945dcc20e06351
Qemu
0
281
2570
2247
2021-11-26T03:50:58Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Qemu]]
=virsh - management user interface=
==Display running domains==
<syntaxhighlight lang=bash>
# virsh list
Id Name State
----------------------------------------------------
1 domain_v1 running
</syntaxhighlight>
==Display domain information==
<syntaxhighlight lang=bash>
# virsh dominfo domain_v1
Id: 1
Name: domain_v1
UUID: b80fe77e-5bdd-29a9-d4c4-84482ace50ff
OS Type: hvm
State: running
CPU(s): 4
CPU time: 674481.3s
Max memory: 15605760 KiB
Used memory: 15605760 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
</syntaxhighlight>
d18cf8ed1c944b204e92e0e1d1164e0fc8a49ccb
Neotermes sp
0
320
2571
1705
2021-11-26T03:51:15Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: Neotermes ]]
{{Systematik
| DeName =
| WissName = Neotermes sp.
| Autor =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| familia = Kalotermitidae
| subfamilia =
| tribus =
| genus = Neotermes
| subgenus =
| species = sp
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur =
| Winterruhe =
}}
* [http://www.boldsystems.org/index.php/Taxbrowser_Taxonpage?taxid=354458 BoldSystems Database : Neotermes castaneus]
f8d09be74d7bc52285bdf93649a2bc90f9c8b18e
Ubuntu remove desktop
0
385
2572
2211
2021-11-26T03:51:17Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Ubuntu|desktop]]
=Ubuntu 20.04=
<syntaxhighlight lang=bash>
# GRUB: Remove splash and quiet from GRUB_CMDLINE_LINUX_DEFAULT
sudo perl -pi -e 's#^(GRUB_CMDLINE_LINUX_DEFAULT=".*)(quiet)(.*")$#\1\3#g,s#^(GRUB_CMDLINE_LINUX_DEFAULT=".*)(splash)(.*")$#\1\3#g' /etc/default/grub
# GRUB: Add or change to GRUB_DISABLE_OS_PROBER=true
sudo perl -ni -e '$c=1 if s/^GRUB_DISABLE_OS_PROBER=.*$/GRUB_DISABLE_OS_PROBER=true/; print; if(eof){print "GRUB_DISABLE_OS_PROBER=true\n" unless $c==1};' /etc/default/grub
# Remove desktop packages
sudo apt --yes purge adwaita-icon-theme gedit-common gir1.2-gdm-1.0 \
gir1.2-gnomebluetooth-1.0 gir1.2-gnomedesktop-3.0 gir1.2-goa-1.0 \
gnome-accessibility-themes gnome-bluetooth gnome-calculator gnome-calendar \
gnome-characters gnome-control-center gnome-control-center-data \
gnome-control-center-faces gnome-desktop3-data \
gnome-font-viewer gnome-getting-started-docs gnome-getting-started-docs-ru \
gnome-initial-setup gnome-keyring gnome-keyring-pkcs11 gnome-logs \
gnome-mahjongg gnome-menus gnome-mines gnome-online-accounts \
gnome-power-manager gnome-screenshot gnome-session-bin gnome-session-canberra \
gnome-session-common gnome-settings-daemon gnome-settings-daemon-common \
gnome-shell gnome-shell-common gnome-shell-extension-appindicator \
gnome-shell-extension-desktop-icons gnome-shell-extension-ubuntu-dock \
gnome-startup-applications gnome-sudoku gnome-system-monitor gnome-terminal \
gnome-terminal-data gnome-themes-extra gnome-themes-extra-data gnome-todo \
gnome-todo-common gnome-user-docs gnome-user-docs-ru gnome-video-effects \
language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-ru \
language-pack-gnome-ru-base language-selector-gnome libgail18 libgail18 \
libgail-common libgail-common libgnome-autoar-0-0 libgnome-bluetooth13 \
libgnome-desktop-3-19 libgnome-games-support-1-3 libgnome-games-support-common \
libgnomekbd8 libgnomekbd-common libgnome-menu-3-0 libgnome-todo libgoa-1.0-0b \
libgoa-1.0-common libpam-gnome-keyring libsoup-gnome2.4-1 libsoup-gnome2.4-1 \
nautilus-extension-gnome-terminal pinentry-gnome3 yaru-theme-gnome-shell \
yaru-theme-icon yaru-theme-sound ubuntu-wallpapers ubuntu-wallpapers-focal \
x11-common x11-apps xcursor-themes xbitmaps xfonts-base xfonts-encodings
# Purge unreferred packages
sudo apt --yes autopurge
# Fix plymouth problems
sudo apt --yes install plymouth-theme-spinner
# Ensure the boot environment creation works
update-initramfs -k $(uname -r) -u
update-grub
</syntaxhighlight>
f5233cdb5d91196060b5ee47004a31847d326889
HP 3par
0
213
2573
2367
2021-11-26T03:51:28Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Hardware]]
Unsorted collection of notes... Don't do this... It doesn't quite work this way...
<syntaxhighlight lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</syntaxhighlight>
==Group virtual volumes to sets (vv -> vvset)==
<syntaxhighlight lang=bash>
3par-storage cli% createvvset -comment "Set for all vvs of Solaris Devel" DevelVVSet
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.3
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.4
</syntaxhighlight>
==Create a set of initiators==
<syntaxhighlight lang=bash>
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c2 21000024ff8f5aae
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c3 21000024ff8f5aaf
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createhostset DevelHosts
3par-storage cli% createhostset -add DevelHosts unix14_c2
3par-storage cli% createhostset -add DevelHosts unix14_c3
</syntaxhighlight>
==Map virtual volumes as LUNs to a set of initiators==
<syntaxhighlight lang=bash>
3par-storage cli% createvlun set:DevelVVSet 0+ set:DevelHosts
</syntaxhighlight>
This means: map all VVs from DevelVVSet to all hosts in DevelHosts, with automatic LUN numbering (+) starting at 0.
<syntaxhighlight lang=bash>
3par-storage cli% showvlun
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type Status ID
0 VV_DB_TEST01_DATA_DS.3 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
0 VV_DB_TEST01_DATA_DS.3 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
-----------------------------------------------------------------------------------------------
4 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 set:DevelVVset set:DevelHosts ---------------- --- host set
---------------------------------------------------------------------
1 total
</syntaxhighlight>
==Watch disk initialization==
<syntaxhighlight lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</syntaxhighlight>
== Solaris ==
===/kernel/drv/sd.conf===
<pre>
sd-config-list="3PARdataVV","physical-block-size:16384";
</pre>
34a63579253a60eac0bdfea810239f493dc02760
SuSE Manager
0
348
2574
2306
2021-11-26T03:51:39Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
This results in a new cloned channel tree named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
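The effect of the -x expression can be checked on its own; the label and timestamp below are examples:

```shell
# "s/\$/-<timestamp>/" appends "-<timestamp>" at the end of each channel label
label=sles12-sp3-pool-x86_64
stamp=2017-11-22_14:26:42
cloned=$(echo "$label" | sed "s/\$/-$stamp/")
echo "$cloned"
```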
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to lookup the available [[#List available activation keys|activation keys]]
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful spacecmd commands
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that newer versions of zypper call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository that matches your SuSE release, such as the ISO this version was installed from:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall the old zypper version that does not call curl with the cipher list DEFAULT_SUSE:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: After some further debugging we found that the system library path pulled in a wrong OpenSSL library.=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
In our version of curl it should be OpenSSL/1.0.2j.
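To pull the linked OpenSSL version out of curl's banner programmatically, a sed one-liner is enough; the sample line is copied from the output above:

```shell
line='curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3'
# Capture everything after "OpenSSL/" up to the next space
ver=$(printf '%s\n' "$line" | sed -n 's/.*OpenSSL\/\([^ ]*\).*/\1/p')
echo "$ver"
```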
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
Ok... then after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 in the system library path. There was another libssl.so.1.0.0 which was version 1.0.2h. OK. What to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that reregister your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create work place ===
<syntaxhighlight lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</syntaxhighlight>
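The -reqexts SAN trick works because openssl req is handed a config that is the system openssl.cnf plus an inline [SAN] section. A self-contained variant with its own minimal config file (hypothetical host names, smaller key than above):

```shell
d=$(mktemp -d)
# Minimal request config with an explicit SAN section
cat > "$d/req.cnf" <<'EOF'
[req]
distinguished_name = dn
[dn]
[SAN]
subjectAltName=DNS:susemgr.tld.de,DNS:othername.tld.de
EOF
openssl req -newkey rsa:2048 -nodes -keyout "$d/server.key" \
  -out "$d/server.csr" -batch -subj '/CN=susemgr.tld.de' \
  -reqexts SAN -config "$d/req.cnf" 2>/dev/null
# The CSR now carries one Subject Alternative Name line with both DNS entries
sans=$(openssl req -noout -text -in "$d/server.csr" | grep -c 'DNS:')
echo "$sans"
```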
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -i $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</syntaxhighlight>
1d3349c5a2867df294d22a6835f0acce5c4ee2a8
NetApp Partner path misconfigured
0
87
2575
928
2021-11-26T03:51:41Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:NetApp]]
=FCP PARTNER PATH MISCONFIGURED=
==Check on the filer which LUNs are affected==
Reset the statistics to zero:
<syntaxhighlight lang=bash>
filer> lun stats -z
</syntaxhighlight>
Then have a look:
<syntaxhighlight lang=bash>
filer> lun stats -o
</syntaxhighlight>
It is bad if the "Partner Ops" or "Partner KBytes" column shows anything greater than 0.
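A tiny awk filter sketches that check; the sample lines and column positions are made up for illustration, not real filer output:

```shell
# Flag sample LUN lines whose partner-ops (2nd) or partner-KB (3rd) column is > 0
flagged=$(printf '/vol/v1/lun1 0 0\n/vol/v1/lun2 12 480\n' |
  awk '$2 > 0 || $3 > 0 { print $1 }')
echo "$flagged"
```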
==Possible causes==
# ALUA not configured
# MPxIO misconfigured: never run "/opt/NTAP/SANToolkit/bin/mpxio_set -e". You can undo it with "/opt/NTAP/SANToolkit/bin/mpxio_set -d" or manually in /kernel/drv/scsi_vhci.conf. Afterwards run "touch /reconfigure ; init 6"
229b50175c4412c50d2b49dacf5959ab90260e49
Category:Mail
14
242
2576
931
2021-11-26T03:52:03Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Category:Security
14
231
2577
880
2021-11-26T03:52:06Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: KnowHow]]
339857b82ab523411f82e25b222bb7f5bb88c2cb
Oracle Discoverer
0
364
2578
2385
2021-11-26T03:52:16Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Oracle]]
== Changing the IP address ==
Just some raw notes from the last change, sorry:
<syntaxhighlight lang=bash>
vi /etc/sysconfig/network/ifcfg-eth0
vi /etc/sysconfig/network/routes
vi /etc/hosts
/etc/init.d/network restart
# Change the VLAN in vCenter
# reconnect with new IP
#
# Change the config
#
/opt/Middleware/ashome_1/chgip/scripts/chgiphost.sh -noconfig -oldhost discoverer01.srv.net.de -newhost discoverer.srv.net.de -oldip 172.16.31.29 -newip 172.16.7.4 -instanceHome /opt/Middleware/asinst_1
/etc/init.d/weblogic stop
# Adminserver, too
/opt/Middleware/wlserver_10.3/server/bin/setWLSEnv.sh
/opt/Middleware/wlserver_10.3/common/bin/wlst.sh
wls:/offline> readDomain('/opt/Middleware/user_projects/domains/ClassicDomain')
wls:/offline/ClassicDomain> cd ('/Machine/neuerhostname')
wls:/offline/ClassicDomain/Machine/neuerhostname> machine=cmo
wls:/offline/ClassicDomain/Machine/neuerhostname> cd ('/Server/AdminServer')
wls:/offline/ClassicDomain/Server/AdminServer> set('Machine', machine)
wls:/offline/ClassicDomain/Server/AdminServer> updateDomain()
wls:/offline/ClassicDomain/Server/AdminServer> exit()
# Start after the changes
/etc/init.d/weblogic start
netstat -plant | grep 9001
tail -f /opt/Middleware/user_projects/domains/ClassicDomain/servers/WLS_DISCO/logs/WLS_DISCO.out
</syntaxhighlight>
f2a7a52d357ea376a03b9f3bd3dca6d29f6c9f5b
Solaris Einzeiler
0
200
2579
2276
2021-11-26T03:52:19Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Einzeiler]]
=== netstat -aun or lsof -i -P -n on Solaris 10 ===
<syntaxhighlight lang=bash>
#!/bin/bash
pfiles /proc/* 2>/dev/null | nawk -v port=$1 '
/^[0-9]/ {
pid=$1; cmd=$2; type="unknown"; next;
}
$1 == "SOCK_STREAM" {
type="tcp"; next;
}
$1 == "SOCK_DGRAM" {
type="udp"; next;
}
$2 ~ /AF_INET?/ && ( port=="" || $5==port ) {
if($2 ~ /[0-9]$/ && type !~ /[0-9]$/) type=type""substr($2,8);
if(cmd!="") { printf("%d %s\n",pid,cmd); cmd="" }
printf(" %s:%s/%s\n",$3,$5,type);
}'
</syntaxhighlight>
1f726191c2e30fa4300b1a97f1d89b79f43615d1
LUKS - Linux Unified Key Setup
0
255
2580
2428
2021-11-26T03:52:46Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:Security]]
==Encrypted swap on LVM==
===Create logical volume for swap===
<syntaxhighlight lang=bash>
# lvcreate -L 2g -n lv-swap vg-root
Logical volume "lv-swap" created
</syntaxhighlight>
<syntaxhighlight lang=bash>
# lvs /dev/vg-root/lv-swap
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-swap vg-root -wi-ao--- 2.00g
</syntaxhighlight>
===Create and get the UUID===
'''The following mkswap command will erase all data on the chosen volume!!!'''
So be sure you pick the right one!
<syntaxhighlight lang=bash>
# mkswap /dev/vg-root/lv-swap
mkswap: /dev/vg-root/lv-swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=4764e516-d025-41de-ab5b-72070a3ae765
</syntaxhighlight>
Save this UUID for the next step!!!
===Create the crypted swap===
Put this in your /etc/crypttab :
<syntaxhighlight lang=bash>
cryptswap1 UUID=4764e516-d025-41de-ab5b-72070a3ae765 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,offset=40,noearly
</syntaxhighlight>
The UUID is the one from mkswap before!!!
Important things:
# offset=40 : Skips the first 40 sectors, so the region where the UUID from mkswap is written on disk stays intact.
# noearly : Avoids race conditions between the init scripts (cryptdisks and cryptdisks-early).
====Start the crypted partition====
<syntaxhighlight lang=bash>
# cryptdisks_start cryptswap1
* Starting crypto disk...
* cryptswap1 (starting)..
* cryptswap1 (started)...
</syntaxhighlight>
====Check the status====
<syntaxhighlight lang=bash>
# cryptsetup status cryptswap1
/dev/mapper/cryptswap1 is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/mapper/vg--root-lv--swap
offset: 40 sectors
size: 4194264 sectors
mode: read/write
</syntaxhighlight>
====Make the swapFS====
<syntaxhighlight lang=bash>
# mkswap /dev/mapper/cryptswap1
mkswap: /dev/mapper/cryptswap1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097128 KiB
no label, UUID=ccdd1d28-0504-4682-8ece-8b6ef381d7e9
</syntaxhighlight>
This new UUID has no relevance for /etc/crypttab.
===Edit the /etc/fstab===
<syntaxhighlight lang=bash>
# vi /etc/fstab
...
/dev/mapper/cryptswap1 none swap sw 0 0
</syntaxhighlight>
Reboot to test your settings.
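After the reboot you can verify the setup, e.g. like this (a quick sketch using the names from above; needs root):
<syntaxhighlight lang=bash>
# Check that the mapping is active and that swap runs on the encrypted device
cryptsetup status cryptswap1
swapon -s          # or: cat /proc/swaps
</syntaxhighlight>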
2a761d47a5cff752333a7d0c3e8ae5ddd6010469
Duncanopsammia axifuga
0
119
2581
339
2021-11-26T03:52:55Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| DeName = Bartkoralle
| WissName = Duncanopsammia axifuga
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Australia, Indian Ocean, South China Sea, Taiwan
| Nahrung = Artemia, phytoplankton, plankton, powdered food, zooplankton, zooxanthellae / light
| Luftfeuchtigkeit =
| Temperatur = 24°C - 26°C
}}
1345a7431b31bd5570879d4bb56bdbac28e92c5a
Solaris zone memory on the fly
0
118
2582
2275
2021-11-26T03:52:57Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris|Zone Memory]]
= Setting memory parameter for running zones =
You can change memory parameters for a running zone, but remember to make the change persistent by updating the zone's config file, too.
So I always do that first.
== Change setting in the config file ==
<syntaxhighlight lang=bash>
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> info
capped-memory:
[swap: 10G]
zonecfg:myzone:capped-memory> set swap=16G
zonecfg:myzone:capped-memory> set physical=16G
zonecfg:myzone:capped-memory> set locked=10G
zonecfg:myzone:capped-memory> info
physical: 16G
[swap: 16G]
[locked: 10G]
zonecfg:myzone:capped-memory> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
#
</syntaxhighlight>
== Change settings for the running zone ==
===First take a look===
<syntaxhighlight lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 65536 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 10.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</syntaxhighlight>
===Set the new values===
<syntaxhighlight lang=bash>
# rcapadm -z myzone -m 16G
# prctl -n zone.max-swap -v 16g -t privileged -r -e deny -i zone myzone
# prctl -n zone.max-locked-memory -v 16g -t privileged -r -e deny -i zone myzone
</syntaxhighlight>
===Verify the values===
<syntaxhighlight lang=bash>
# zlogin myzone prtconf | grep Memory
prtconf: devinfo facility not available
Memory size: 16384 Megabytes
# prctl -t privileged -i zone myzone
zone: 1: myzone
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
zone.max-swap
privileged 16.0GB - deny -
zone.cpu-shares
privileged 1 - none -
</syntaxhighlight>
Done.
21fca97debb4037e193f257681462dd85026b35f
SunServer
0
210
2583
2378
2021-11-26T03:52:59Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Hardware]]
=X86 Systems=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systems=
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</syntaxhighlight>
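The slot in the last line is derived as 4 × controller + PhyNum. As a standalone sketch (hypothetical helper name, not part of the script above):
<syntaxhighlight lang=bash>
# slot = 4 * controller + phy_num, as computed in get_disk_slot.sh
get_slot() { echo $(( 4 * $1 + $2 )); }
get_slot 1 2   # controller 1, PhyNum 2 => slot 6
</syntaxhighlight>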
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<syntaxhighlight lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</syntaxhighlight>
* Delete default gateway:
<syntaxhighlight lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</syntaxhighlight>
* Set:
<syntaxhighlight lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</syntaxhighlight>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means turning suppression of break signals off ;-) :
<syntaxhighlight lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</syntaxhighlight>
97572658596b3a0c0dddfd363bc7ad0294abc79f
HP 3par
0
213
2584
2573
2021-11-26T03:53:23Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Hardware]]
Unsorted collection of notes... don't follow this blindly, it doesn't quite work this way...
<syntaxhighlight lang=bash>
3par-clusterstorage cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 24 29-35 321a 321a DCN1 n/a
1 cage1 1:0:2 0 0:0:2 0 24 34-36 321a 321a DCS2 n/a
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0 FC_R5_31_cage0
3par-storage cli% createcpg -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1 FC_R5_31_cage1
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% showcpg -sdg
------(MB)------
Id Name Warn Limit Grow Args
...
6 FC_R5_31_cage0 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 0
7 FC_R5_31_cage1 - - 32768 -t r5 -ssz 4 -ha mag -p -devtype FC -mg 0-19 -cg 1
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createvv -wait 0 -comment "Mirror A: PRODDB" FC_R5_31_cage0 VV_DB_PROD01_DATA_DS.1 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: PRODDB" FC_R5_31_cage1 VV_DB_PROD01_DATA_DS.2 2T
3par-storage cli% createvv -wait 0 -comment "Mirror A: TESTDB" FC_R5_31_cage0 VV_DB_TEST01_DATA_DS.3 2T
3par-storage cli% createvv -wait 0 -comment "Mirror B: TESTDB" FC_R5_31_cage1 VV_DB_TEST01_DATA_DS.4 2T
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% showvv -sortcol 0 -showcols Id,Name,UsrCPG,Prov,Usr_Used_MB -cpg FC_R5_31_cage0,FC_R5_31_cage1
Id Name UsrCPG Prov Usr_Used_MB
2 VV_DB_PROD01_DATA_DS.1 FC_R5_31_cage0 full 2097152
3 VV_DB_PROD01_DATA_DS.2 FC_R5_31_cage1 full 2097152
4 VV_DB_TEST01_DATA_DS.3 FC_R5_31_cage0 full 2097152
5 VV_DB_TEST01_DATA_DS.4 FC_R5_31_cage1 full 2097152
-----------------------------------------------------------------
2 total 8388608
</syntaxhighlight>
==Group virtual volumes to sets (vv -> vvset)==
<syntaxhighlight lang=bash>
3par-storage cli% createvvset -comment "Set for all vvs of Solaris Devel" DevelVVSet
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.3
3par-storage cli% createvvset -add DevelVVSet VV_DB_TEST01_DATA_DS.4
</syntaxhighlight>
==Create a set of initiators==
<syntaxhighlight lang=bash>
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c2 21000024ff8f5aae
3par-storage cli% createhost -os Solaris -model M10 -contact "SuperAdmin" -comment "Developer node" -loc "Germany, Hamburg" -persona 1 unix14_c3 21000024ff8f5aaf
</syntaxhighlight>
<syntaxhighlight lang=bash>
3par-storage cli% createhostset DevelHosts
3par-storage cli% createhostset -add DevelHosts unix14_c2
3par-storage cli% createhostset -add DevelHosts unix14_c3
</syntaxhighlight>
==Map virtual volumes as LUNs to a set of initiators==
<syntaxhighlight lang=bash>
3par-storage cli% createvlun set:DevelVVSet 0+ set:DevelHosts
</syntaxhighlight>
This maps all VVs from DevelVVSet to all hosts in DevelHosts, with automatic LUN numbering (+) starting at 0.
<syntaxhighlight lang=bash>
3par-storage cli% showvlun
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type Status ID
0 VV_DB_TEST01_DATA_DS.3 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c2 21000024FF8F5AAE 0:1:1 host set active 1
0 VV_DB_TEST01_DATA_DS.3 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
1 VV_DB_TEST01_DATA_DS.4 unix14_c3 21000024FF8F5AAF 0:1:2 host set active 1
-----------------------------------------------------------------------------------------------
4 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 set:DevelVVset set:DevelHosts ---------------- --- host set
---------------------------------------------------------------------
1 total
</syntaxhighlight>
==Watch disk initialization==
<syntaxhighlight lang=bash>
3par-storage cli% showsys -space -devtype FC
------------- System Capacity (MB) -------------
Total Capacity : 57139200
Allocated : 40258560
Volumes : 36577280
Non-CPGs : 0
User : 0
Snapshot : 0
Admin : 0
CPGs (TPVVs & TDVVs & CPVVs) : 36577280
User : 36577280
Used : 36427020
Unused : 0
Snapshot : 0
Used : 0
Unused : 0
Admin : 0
Used : 0
Unused : 0
Unmapped : 0
System : 3681280
Internal : 252928
Spare : 3428352
Used : 0
Unused : 3428352
Free : 16880640
Initialized : 7827456
Uninitialized : 9053184 <--- Still initializing!!!!
Unavailable : 0
Failed : 0
------------- Capacity Efficiency --------------
Compaction : 1.0
Dedup : --------
</syntaxhighlight>
== Solaris ==
===/kernel/drv/sd.conf===
<pre>
sd-config-list="3PARdataVV","physical-block-size:16384";
</pre>
4697d5fe18958ded92f8531f5e34e1e0b40428c0
Tmux tips and tricks
0
376
2585
2341
2021-11-26T03:53:36Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
== Enable mouse scrollwheel ==
<syntaxhighlight lang=bash>
# echo "set -g mouse on" >> ~/.tmux.conf
</syntaxhighlight>
2d4b7a780f215a3666a6e2222b7c71347dc74d96
Category:Solaris
14
21
2586
34
2021-11-26T03:54:02Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
EasyRSA
0
275
2587
2345
2021-11-26T03:54:52Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category: Security]]
[[Category: Linux]]
=Create CA user=
<syntaxhighlight lang=bash>
# groupadd -g 22000 ca && adduser --uid 22000 --gid 22000 --gecos "CA user" --encrypt-home ca
</syntaxhighlight>
=Do everything CA-specific as the CA user!=
<syntaxhighlight lang=bash>
# su - ca
ca@rzeasyrsa:~$ ecryptfs-mount-private
ca@rzeasyrsa:~$ cd
ca@rzeasyrsa:~$ exec bash
</syntaxhighlight>
=Setup EasyRSA=
==Ubuntu packages==
<syntaxhighlight lang=bash>
# aptitude install openvpn easy-rsa
</syntaxhighlight>
==Create your CA==
<syntaxhighlight lang=bash>
mkdir --mode=0700 OpenVPN-CA
cd OpenVPN-CA
for i in /usr/share/easy-rsa/* ; do ln -s $i ; done
rm -f vars clean-all
cp /usr/share/easy-rsa/vars .
</syntaxhighlight>
==Edit the defaults==
Set up proper defaults in your vars file.
Source it every time before you do CA work.
==Base setup (Only one time at the beginning!!!)==
'''Really do this only once, before you start with your CA. It will delete everything: keys and certificates!!!'''
<syntaxhighlight lang=bash>
$ cd OpenVPN-CA
$ . vars
$ /usr/share/easy-rsa/clean-all
</syntaxhighlight>
==Generate DH parameter==
<syntaxhighlight lang=bash>
$ cd OpenVPN-CA
$ . vars
$ KEY_SIZE=4096 ./build-dh
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
$ cd OpenVPN-CA/keys
$ openssl dhparam -2 -out dh4096.pem 4096
</syntaxhighlight>
==Generate TLS-auth parameter==
<syntaxhighlight lang=bash>
$ cd OpenVPN-CA/keys
$ /usr/sbin/openvpn --genkey --secret ta.key
</syntaxhighlight>
==User certificates with passwords in scripts==
If you want to work with password-encrypted keys and want to batch process many users, you might find this helpful.
Add a line after <i># output_password = secret</i>:
<syntaxhighlight lang=bash>
# output_password = secret
output_password = $ENV::KEY_PASS
</syntaxhighlight>
After that, the openssl calls take the needed password from the environment variable <i>KEY_PASS</i>.
You can call it like this, for example:
<syntaxhighlight lang=bash>
KEY_PASS="password" ./build-key-pass --batch user
</syntaxhighlight>
==Create your CA certificate==
<syntaxhighlight lang=bash>
$ cd OpenVPN-CA
$ . vars
$ ./build-ca
</syntaxhighlight>
Check it with
<syntaxhighlight lang=bash>
$ openssl x509 -noout -text -in keys/ca.crt
</syntaxhighlight>
==Create the server certificate==
<syntaxhighlight lang=bash>
$ cd OpenVPN-CA
$ . vars
$ ./build-key-server openvpn-server
</syntaxhighlight>
For example, server keys with 5 years' validity:
<syntaxhighlight lang=bash>
$ KEY_EXPIRE=1825 ./build-key-server openvpn-server
</syntaxhighlight>
=Create your OpenVPN config=
==get_ovpn.sh==
I wrote a little helper script called get_ovpn.sh:
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2016
# You may use it for free but on your own risk!!!
TYPE="client"
KEY_DIR="OpenVPN-CA/keys"
function usage() {
if [ "_${1}_" != "_help_" ]
then
printf "ERROR: $*\n"
fi
printf "Options:\n"
cat <<EOF
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: ${configtype}.ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
EOF
exit 1
}
while [ $# -gt 0 ]
do
#if [ $# -ge 2 ]; then value=$2; fi
case $1 in
-h|--help)
usage "help"
;;
--?*=?*|-?*=?*)
param=${1%=*}
value=${1#*=}
shift;
;;
--?*=|-?*=)
param=${1%=*}
usage "${param} needs a value!"
;;
*)
if [ $# -lt 2 ] ; then usage "$1 needs a value!"; fi
param=$1
value=$2
shift; shift;
;;
esac
case $param in
-t|--template)
TEMPLATE=${value}
;;
-k|--key-dir)
KEY_DIR=${value}
;;
-u|--user)
OVPN_USER=${value}
;;
-c|--config-type)
TYPE=${value}
;;
-s|--server-name)
SERVER=${value}
;;
*)
param=${param#--}
param=${param/-/_}
export ${param^^}=${value}
;;
esac
done
TEMPLATE=${TEMPLATE:-"${TYPE}.ovpn"}
[ -z "${SERVER}" -a "_${TYPE}_" == "_server_" ] && usage "For which server?\n"
[ -z "${OVPN_USER}" -a "_${TYPE}_" == "_client_" ] && usage "For which user?\n"
[ ! -f "${TEMPLATE}" ] && usage "Template file ${TEMPLATE} not found!\n"
[ ! -d "${KEY_DIR}" ] && usage "Key directory ${KEY_DIR} not found!\n"
[ ! -f "${KEY_DIR}/ta.key" ] && usage "TLS Auth ${KEY_DIR}/ta.key not found!\n"
[ ! -f "${KEY_DIR}/ca.crt" ] && usage "CA Certificate ${KEY_DIR}/ca.crt not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.key" -a "_${TYPE}_" == "_server_" ] && usage "Private key ${KEY_DIR}/${SERVER}.key not found!\n"
[ ! -f "${KEY_DIR}/${SERVER}.crt" -a "_${TYPE}_" == "_server_" ] && usage "Certificate ${KEY_DIR}/${SERVER}.crt not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.key" -a "_${TYPE}_" == "_client_" ] && usage "Private key ${KEY_DIR}/${OVPN_USER}.key not found!\n"
[ ! -f "${KEY_DIR}/${OVPN_USER}.crt" -a "_${TYPE}_" == "_client_" ] && usage "Certificate ${KEY_DIR}/${OVPN_USER}.crt not found!\n"
export SERVER
gawk \
-v user="${OVPN_USER}" \
-v key_dir="${KEY_DIR}" \
-v configtype="${TYPE}" \
-v server="${SERVER}" \
'
function print_fingerprint(certfile){
command="openssl x509 -noout -fingerprint -in "certfile;
FS="=";
while(command | getline);
retval=$2;
close(command);
return retval;
}
function print_part(part,certfile){
command="openssl x509 -noout -text -in "certfile;
while(command | getline){
if ($1 == part) {
for(i=2;i<=NF;i++){
if(i==NF) gsub(/\//,", ", $i)
retval=retval""$i;
if(i<NF) retval=retval" ";
}
}
};
close(command);
return retval;
}
function print_cert(name,certfile){
# Header
#printf "# %s\n",certfile;
while(getline < certfile){if(/^#/) print $0};
close(certfile);
printf "<%s>\n",name;
while(getline < certfile){if(!/^#/) print $0};
close(certfile);
printf "</%s>\n",name;
}
{
# Static part
rest=$0;
while(match(rest,/<[A-Z0-9_]+>/)) {
matched=substr(rest,RSTART+1,RLENGTH-2);
##print "Matched:",matched;
if (ENVIRON[matched]) gsub("<"matched">",ENVIRON[matched]);
rest=substr(rest,RSTART+RLENGTH);
}
print $0;
}
END{
# Dynamic part
if(configtype=="client") {
printf "remote-cert-tls server\n";
} else {
printf "remote-cert-tls client\n";
}
# TLS Auth
print_cert("tls-auth",key_dir"/ta.key");
printf "key-direction %d\n",(configtype=="client");
printf "\n";
print_cert("dh",key_dir"/dh4096.pem");
printf "\n";
# Ca Certificate
if (configtype=="client") {
printf "verify-x509-name \"%s\"\n",print_part("Subject:",key_dir"/"server".crt");
}
printf "verify-hash %s\n",print_fingerprint(key_dir"/ca.crt");
print_cert("ca",key_dir"/ca.crt");
printf "\n";
# User Data
if (configtype=="client") {
print_cert("cert",key_dir"/"user".crt");
printf "\n";
print_cert("key",key_dir"/"user".key");
printf "\n";
} else {
print_cert("cert",key_dir"/"server".crt");
printf "\n";
# key secret/<SERVER>.key is in template
}
#print ENVIRON["SERVER_NET"];
}' ${TEMPLATE}
</syntaxhighlight>
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh --help
Options:
-h|--help This help
-c|--config-type Default: client (client|server)
-k|--key-dir Default: OpenVPN-CA/keys Directory where certificates and keys can be found
-t|--template Default: .ovpn The template to use
-u|--user User to create config for
-s|--server Servername for --config-type=server
--what-ever=value Replace <WHAT_EVER> in template with value e.g.: --server-net=... replaces <SERVER_NET> with the given value
</syntaxhighlight>
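The placeholder replacement can be tried in isolation. This minimal sketch reproduces the <WHAT_EVER> substitution loop from get_ovpn.sh with plain awk; the template line and variable values here are just examples:
<syntaxhighlight lang=bash>
# Substitute <NAME> placeholders from environment variables, as get_ovpn.sh does
template='remote <SERVER_IP> <SERVER_PORT>'
rendered=$(printf '%s\n' "$template" | SERVER_IP=192.168.18.23 SERVER_PORT=1234 awk '{
    rest = $0;
    while (match(rest, /<[A-Z0-9_]+>/)) {
        name = substr(rest, RSTART + 1, RLENGTH - 2);
        if (ENVIRON[name]) gsub("<" name ">", ENVIRON[name]);
        rest = substr(rest, RSTART + RLENGTH);
    }
    print;
}')
echo "$rendered"   # remote 192.168.18.23 1234
</syntaxhighlight>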
==OpenVPN Server ==
===OpenVPN Server Template===
# I am using the mysql-auth-plugin from [https://github.com/chantra/openvpn-mysql-auth https://github.com/chantra/openvpn-mysql-auth]
# On the OpenVPN server the user openvpn has uid 1195 and gid 1195, and I have a tmp dir for this user in /etc/fstab like this:
<pre>
none /run/openvpn_tmp tmpfs nodev,noexec,nosuid,size=5m,mode=0700,uid=1195,gid=1195 0 0
</pre>
Example server.ovpn:
<pre>
local <SERVER_IP>
port <SERVER_PORT>
tmp-dir /run/openvpn_tmp
management <MANAGEMENT_IP> <MANAGEMENT_PORT> /etc/openvpn/management-password
proto udp
dev tun
tun-mtu 1500
mssfix
topology subnet
server <SERVER_NET> <SERVER_NETMASK>
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS <DNS1>"
push "dhcp-option DNS <DNS2>"
push "route 192.168.18.0 255.255.255.0 net_gateway"
push "route 192.168.0.0 255.255.0.0"
push "route 10.0.0.0 255.0.0.0"
push "route 172.28.0.0 255.255.0.0"
client-to-client
duplicate-cn
keepalive 10 120
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
reneg-sec 36000
comp-lzo adaptive
max-clients 25
user openvpn
group openvpn
persist-key
persist-tun
status /var/log/openvpn/<SERVER>-status.log 2
status-version 2
log-append /var/log/openvpn/<SERVER>-openvpn.log
verb 3
plugin /usr/lib/openvpn/libopenvpn-mysql-auth.so -c /etc/openvpn/auth/<SERVER>_auth_mysql.conf
key secret/<SERVER>.key # This file should be kept secret
remote-cert-tls client
username-as-common-name
</pre>
===Generate OpenVPN Config for server===
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--server openvpn \
--config-type server \
--server-ip=192.168.18.23 \
--server-port=1234 \
--server-net=10.214.60.128 \
--server-netmask=255.255.255.128 \
--management-ip=192.168.17.23 \
--management-port=11234 \
--dns1=192.168.0.50 \
--dns2=192.168.0.30 \
--template server.ovpn \
--key-dir=OpenVPN-CA/keys
</syntaxhighlight>
==OpenVPN Client==
===OpenVPN client template===
Example client.ovpn:
<pre>
client
dev tun
proto udp
remote <SERVER_IP> <SERVER_PORT>
tls-client
ns-cert-type server
comp-lzo
auth-user-pass
auth SHA512
cipher AES-256-CBC
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-AES128-SHA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
#tls-version-min 1.2
route-delay 5 30
persist-key
persist-tun
nobind
mssfix
push-peer-info
reneg-sec 0
tun-mtu 1500
verb 3
#auth-nocache
</pre>
===Generate OpenVPN config for a client===
<syntaxhighlight lang=bash>
ca@rzeasyrsa:~$ ./get_ovpn.sh \
--config-type client \
--server-ip 192.168.18.23 \
--server-port 1234 \
--template client.ovpn \
--key-dir OpenVPN-CA/keys \
--user vpnclient
</syntaxhighlight>
b21d783bb7e4faf9a76db682c35c4bc12b0f4d2a
Category:Networking
14
285
2588
1289
2021-11-26T03:56:39Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Category:Arum
14
86
2589
173
2021-11-26T03:59:35Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Araceae]]
5024b68e594e35c88018933d70ac6158500c45b3
Solaris grub
0
199
2590
2288
2021-11-26T04:01:34Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Solaris|Grub]]
[[Category:Grub|Solaris]]
= Set the SP console on x86 systems to 115200 baud =
You need to set the new speed in all three places:
# grub
# SP host serial
# BIOS serial
== Solaris 11 ==
=== Set speed and port in grub ===
<syntaxhighlight lang=bash>
# bootadm set-menu console=serial serial_params=0,115200,8,N,1
# bootadm generate-menu -f
# eeprom console=ttya
# eeprom ttya-mode=115200,8,n,1,-
</syntaxhighlight>
== Solaris 10 ==
=== Set speed and port in grub ===
/rpool/boot/grub/menu.lst
<syntaxhighlight lang=bash>
title Oracle Solaris 10 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-"
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="115200,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
</syntaxhighlight>
=== Set speed ===
/boot/solaris/bootenv.rc
<syntaxhighlight lang=bash>
setprop ttya-mode '115200,8,n,1,-'
</syntaxhighlight>
Takes effect after reboot.
=== Set console login speed ===
/etc/ttydefs
<syntaxhighlight lang=bash>
console115200:115200 hupcl opost onlcr:115200::console
</syntaxhighlight>
<syntaxhighlight lang=bash>
# svccfg -s svc:/system/console-login setprop ttymon/label= astring: "console115200"
# svcadm refresh svc:/system/console-login
# svcadm restart svc:/system/console-login
</syntaxhighlight>
== Set speed in BIOS ==
Enter BIOS setup with <i>F2</i> or <i>CTRL+E</i>, then go to
<pre>
Advanced -> Serial Port Console Redirection -> Bits per second : 115200
</pre>
== Set speed for SP host serial ==
<syntaxhighlight lang=bash>
-> set SP/serial/host pendingspeed=115200 commitpending=true
Set 'pendingspeed' to '115200'
Set 'commitpending' to 'true'
-> show SP/serial/host speed
/SP/serial/host
Properties:
speed = 115200
</syntaxhighlight>
=grub rescue>=
The problem:
<syntaxhighlight lang=bash>
GRUB loading...
Welcome to GRUB!
error: couldn't find a valid DVA.
Entering rescue mode...
grub rescue>
</syntaxhighlight>
==Get into the normal grub==
Find your devices:
<syntaxhighlight lang=bash>
grub rescue> ls
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1)
</syntaxhighlight>
===Find the directory where the normal.mod file resides===
In this example the boot environment is named <i>Solaris11.3SRU15</i>.
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<syntaxhighlight lang=bash>
grub rescue> ls (hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
... normal.mod ...
</syntaxhighlight>
===Set the prefix to the right place===
Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<syntaxhighlight lang=bash>
grub rescue> set
prefix=(hd0,gpt2)//@/boot/grub/i386-pc
root=hd0,gpt2
grub rescue> set prefix=(hd0,gpt2)/ROOT/Solaris11.3SRU15/@/boot/grub/i386-pc
</syntaxhighlight>
===Now you can load and start the module called "normal"===
<syntaxhighlight lang=bash>
grub rescue> insmod normal
grub rescue> normal
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</syntaxhighlight>
==Normal grub is booted, now start Solaris==
At the <i>grub></i> prompt enter the following lines. Remember to replace <i>Solaris11.3SRU15</i> with your boot environment name.
<syntaxhighlight lang=bash>
insmod zfs
zfs-bootfs /ROOT/Solaris11.3SRU15/@/ zfs_bootfs
set kern=/platform/i86pc/kernel/amd64/unix
$multiboot /ROOT/Solaris11.3SRU15/@/$kern $kern -B $zfs_bootfs
insmod gzio
$module /ROOT/Solaris11.3SRU15/@/platform/i86pc/amd64/boot_archive
boot
</syntaxhighlight>
7cd5fb71e6649fd6a0eb1e49e24fa0937e1845c9
RootKitScanner
0
237
2591
2393
2021-11-26T04:02:11Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Security]]
=RKHunter=
RKHunter is a local security scanner for Linux, Solaris, and some other UNIX operating systems.
I will describe its usage on Ubuntu/Linux here.
==Installation==
First of all install it to your system:
<syntaxhighlight lang=bash>
# aptitude install rkhunter
</syntaxhighlight>
==Update the rule base==
After that (and do this from time to time) update the rule base:
<syntaxhighlight lang=bash>
# rkhunter --update
[ Rootkit Hunter version 1.4.0 ]
Checking rkhunter data files...
Checking file mirrors.dat [ No update ]
Checking file programs_bad.dat [ Updated ]
Checking file backdoorports.dat [ No update ]
Checking file suspscan.dat [ No update ]
Checking file i18n/cn [ No update ]
Checking file i18n/de [ Updated ]
Checking file i18n/en [ Updated ]
Checking file i18n/tr [ Updated ]
Checking file i18n/tr.utf8 [ Updated ]
Checking file i18n/zh [ No update ]
Checking file i18n/zh.utf8 [ No update ]
</syntaxhighlight>
==Do the first check==
<syntaxhighlight lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
Warning: Found enabled inetd service: rstatd/1-5
Warning: syslog-ng configuration file allows remote logging: destination d_logserver { udp("logserver-1"); };
Warning: Suspicious file types found in /dev:
/dev/.udev/rules.d/root.rules: ASCII text
Warning: Hidden directory found: '/etc/.bzr: directory '
Warning: Hidden directory found: '/dev/.udev: directory '
Warning: Hidden file found: /etc/.bzrignore: ASCII text
Warning: Hidden file found: /etc/.etckeeper: ASCII text
Warning: Hidden file found: /dev/.initramfs: symbolic link to `/run/initramfs'
</syntaxhighlight>
Many warnings.
Check which of them are false positives and adjust your '''/etc/rkhunter.conf''' accordingly.
==Acknowledge false positives==
For example, to get rid of the warnings above, add these lines to '''/etc/rkhunter.conf''':
<syntaxhighlight lang=bash>
ALLOWHIDDENDIR="/dev/.udev"
ALLOWHIDDENDIR="/etc/.bzr"
ALLOWHIDDENFILE="/etc/.bzrignore"
ALLOWHIDDENFILE="/etc/.etckeeper"
ALLOWHIDDENFILE="/dev/.initramfs"
ALLOWDEVFILE="/dev/.udev/rules.d/root.rules"
INETD_ALLOWED_SVC=rstatd/1-5
ALLOW_SYSLOG_REMOTE_LOGGING=1
</syntaxhighlight>
After that, rkhunter should produce no output:
<syntaxhighlight lang=bash>
# rkhunter --check --pkgmgr DPKG --skip-keypress --report-warnings-only
#
</syntaxhighlight>
Now your base setup is done. From now on, any further output should prompt you to take a closer look at your system.
==Configure ongoing security checks==
Configure which user should receive warnings via email in your '''/etc/rkhunter.conf''':
<syntaxhighlight lang=bash>
MAIL-ON-WARNING="security-team@yourdomain.tld"
</syntaxhighlight>
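To run the check regularly, a cron entry can call rkhunter in cron mode. This is a sketch, not the package's stock cron job: the file path and schedule are assumptions, so check rkhunter(8) for the exact flags your version supports.

```shell
# /etc/cron.d/rkhunter -- hypothetical file; adjust schedule and path.
# --cronjob runs non-interactively (implies --check --skip-keypress);
# warnings are mailed to the MAIL-ON-WARNING address from /etc/rkhunter.conf.
0 3 * * * root /usr/bin/rkhunter --cronjob --report-warnings-only
```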
b240399740135a5231cc63eb1481511f6b624ef1
Calcinus laevimanus
0
125
2592
364
2021-11-26T04:06:16Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:MeerwasserAquarium]]
{{Systematik
| Bild = Calcinus_laevimanus.png
| Bildbeschreibung = Calcinus laevimanus auf Nahrungssuche
| DeName = Großscheren-Einsiedlerkrebs
| WissName = Calcinus laevimanus
| Autor =
| Untergattung =
| Gattung =
| Unterfamilie =
| Art =
| Verbreitung = Indopazifik
| Habitat =
| Nahrung = Algen, Artemia, Flockenfutter, Frostfutter, Nori-Algen, Salat
| Luftfeuchtigkeit =
| Temperatur = 23°C - 28°C
}}
<gallery mode="packed-hover">
Image:Calcinus_laevimanus.png|Auf Nahrungssuche
Image:Calcinus_laevimanus_neues_Haus.png|Nach Umzug in neues Schneckenhaus
</gallery>
a9307ede3a718014ece816b2969879f229fc8085
Category:Telodeinopus
14
157
2593
429
2021-11-26T04:07:00Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Tausendfuesser]]
350bfff5a2ce40cf57a13992717d176cf5568e4a
SSH Tipps und Tricks
0
75
2594
2432
2021-11-26T04:08:26Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, the way to the destination=
==SSH across one or more hops==
To make an SSH connection from Host_A to Host_B you have to tunnel through two machines (GW_1 and GW_2). If you log in step by step, it is often difficult to carry the port forwardings or the SOCKS5 proxy along. It is easier to define the hops for the path from Host_A to Host_B in the SSH configuration.
Host_B can only be reached from GW_2, so we create an entry for it in the ~/.ssh/config:
<pre>
Host Host_B
ProxyJump GW_2
</pre>
GW_2 itself can only be reached via GW_1, so we need an entry for that as well:
<pre>
Host GW_2
ProxyJump GW_1
</pre>
Now you simply type <i>ssh Host_B</i> on Host_A and are tunneled through the two gateways GW_1 and GW_2.
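The same chain also works ad hoc without config entries, since OpenSSH (7.3 or newer) accepts a comma-separated list of jump hosts:

```shell
# One-off equivalent of the two ProxyJump entries above:
ssh -J GW_1,GW_2 Host_B
```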
==Port forwardings, e.g. for NFS, are now as simple as this==
<pre>
root@Host_A# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@Host_A# ssh -R 22049:localhost:2049 user@Host_B
user@Host_B$ su -
root@Host_B# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
The tunnel connections are established in the background, and the port forwarding goes directly from Host_A to Host_B. Very lean and elegant.
PS: /dev/tcp/%h/%p is a bash builtin; %h and %p are filled in by ssh with the host (%h) and port (%p). This applies when the hops are defined via the older ProxyCommand style instead of ProxyJump.
==Breaking out of paradise==
Problem: the environment you are in is so unfortunately walled in with firewalls that you cannot work. But you have to get out via SSH to quickly look something up or fetch something elsewhere. Well, there is always a way...
The prerequisite is a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
You also need an SSH server with an sshd listening on port 443, because most proxies only let you through to well-known ports.
Then put the following into your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, <i>ssh ssh-via-proxy</i> takes you to the SSH destination you want to reach. Of course you can use this host as a ProxyCommand target again, and so on.
==Oh yes... the internal wiki...==
Not a problem either: if it is only reachable from the internal network, we simply route our requests through a SOCKS proxy:
<pre>
user@Host_A$ ssh -C -N -T -f -D8080 interner-rechner
user@Host_A$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.intern.firma.de/ &
</pre>
The options are:
<pre>
-C Requests compression <- this is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname interner-rechner
</pre>
And then <i>ssh -N -f wiki</i> (I have not yet found config equivalents for -N and -f; newer OpenSSH releases offer <i>SessionType none</i> and <i>ForkAfterAuthentication yes</i>).
=The fingerprint=
For verification it is often easier to work with shorter strings. The fingerprint is therefore handy for comparing keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
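The example above shows the older colon-separated MD5 format. Newer OpenSSH (6.8 and later) prints SHA256 fingerprints by default; to compare against an old MD5 fingerprint you can request that format explicitly. A sketch using a throwaway key (ed25519 chosen as an example):

```shell
# Generate a throwaway key and show both fingerprint formats.
tmpdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/demo_key"
ssh-keygen -lf "$tmpdir/demo_key.pub"          # default: SHA256:... fingerprint
ssh-keygen -E md5 -lf "$tmpdir/demo_key.pub"   # older colon-separated MD5 format
```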
=Restricting users=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Start pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
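The nawk one-liner above turns an RFC 4716 (SSH2) public key block into OpenSSH's single-line format. The same extraction can be written as a small reusable function and tested offline; a sketch assuming an RSA key and a simple, non-continued Comment header (the function name is made up):

```shell
# Convert an RFC 4716 (SSH2) public key file to OpenSSH single-line format.
# Assumes an RSA key and a one-line Comment: "..." header.
convert_ssh2_to_openssh() {
    awk '
        /^---- BEGIN SSH2 PUBLIC KEY ----/ { inkey = 1; printf "ssh-rsa "; next }
        /^---- END SSH2 PUBLIC KEY ----/   { inkey = 0 }
        /^Comment:/ { comment = $2; gsub(/"/, "", comment); next }
        inkey && $0 !~ /:/ { printf "%s", $0 }   # base64 body lines contain no colon
        END { printf " %s\n", comment }
    ' "$1"
}
```

Usage would be e.g. <code>convert_ssh2_to_openssh pubkey.ppk >> ~/.ssh/authorized_keys</code>.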
=Probleme mit älteren Gegenstellen=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
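Instead of typing these options each time, both workarounds can be made permanent per host in ~/.ssh/config. A sketch; the Host name is a placeholder:

```shell
# ~/.ssh/config -- fragment for a single legacy device
Host legacy-device
    HostKeyAlgorithms +ssh-dss
    KexAlgorithms +diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
```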
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
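Before restarting sshd with the modified config it is worth validating it, since a syntax error can lock you out of the machine. A sketch; the service name varies by distribution:

```shell
# -t: test mode, check configuration validity and exit.
# "ssh" is the Debian/Ubuntu service name; others use "sshd".
sshd -t && systemctl restart ssh
```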
==Create SFTP user==
Now you can put the authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
and create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
Since Google Authenticator is available on several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) for details.
The meaning of the parameters:
* success=done : if pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : if a new authentication token is required, treat that as done as well. According to the man page, done is like ok, except that the stack also terminates and control is immediately returned to the application.
* default=die : if pam_google_authenticator fails, no other authentication will be tried.
* nullok : allow users that have not yet set up an OTP secret to authenticate without one.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in the /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the line in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because /etc/pam.d/sshd enables password authentication.
0d4ed9829f0821cad03aff1709171fbf8de293c0
StorageTek SL150
0
190
2595
2510
2021-11-26T04:10:16Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Backup]]
=StorageTek SL150 Modular Tapelibrary=
==General Knowledge==
===Default Password===
passw0rd
===Solaris Configuration===
To use the Ultrium-6 tape drives with Solaris you have to put the following into your st.conf:
<syntaxhighlight lang=bash>
tape-config-list =
"HP Ultrium 6-SCSI ","HP Ultrium 6-SCSI","HP Ultrium 6","HP Ultrium LTO 6","HP_LTO_GEN_6";
HP_LTO_GEN_6 = 2,0x3B,0,0x18659,4,0x00,0x46,0x58,0x5A,3,60,1200,600,1200,600,600,18000
</syntaxhighlight>
The vendor string has to be exactly 8 characters:
HP followed by 6 spaces, then the product string.
Unload the st driver after changing the st.conf:
<syntaxhighlight lang=bash>
# modunload -i $(modinfo | nawk '$6=="st"{print $1}')
</syntaxhighlight>
Check if the new config settings matched the drive:
<syntaxhighlight lang=bash>
# mt -f /dev/rmt/0cn config
"HP Ultrium 6-SCSI", "HP Ultrium 6-SCSI ", "CFGHPULTRIUM6SCSI";
CFGHPULTRIUM6SCSI = 2,0x3B,0,0x18619,4,0x58,0x58,0x5A,0x5A,3,60,1200,600,1200,600,600,18000;
</syntaxhighlight>
==General Documentation==
* [https://support.oracle.com/handbook_partner/Systems/SL150/SL150.html System Handbook]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1476370.2 Information Center]
* [http://docs.oracle.com/cd/E35103_07/index.html StorageTek SL150 Modular Tape Library]
==Service Requests==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1599469.1 How to Generate and Retrieve a Service Bundle]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1505959.1 Format of SL150 Serial Number]
==Firmware==
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1474172.1 How to Find Firmware Update Patches]
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1922504.1 How to find drive firmware patches for LTO tape drives]
==Backup Software related links==
* [http://www-01.ibm.com/support/docview.wss?uid=swg21598187 Oracle StorageTek SL150 Modular Tape Library System Configuration Information for IBM Tivoli Storage Manager Server]
==Other Links==
===Installation things===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1473827.1 How to Manually Retract the Robot Up To the Parked Position]
===Features===
* [https://support.oracle.com/epmos/faces/DocContentDisplay?id=1481733.1 Auto Clean Support for SL150 Library]
67fe8a9745db417bfc5d38ba3eefd190e55fcbb7
Category:Tetramorium
14
16
2596
29
2021-11-26T04:12:15Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Ameisen]]
8b4529e141e02312735639bddbd1b0df94a379a3
Tomcat
0
375
2597
2339
2021-11-26T04:13:41Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
== Terminating SSL at the webserver or load balancer ==
If you want to let Tomcat know that it sits behind another instance that terminates SSL, so that Tomcat puts https:// into generated links, just add <i>scheme="https"</i> and <i>proxyPort="443"</i> to the non-SSL connector definition like this:
<syntaxhighlight>
<Connector port="8080" protocol="HTTP/1.1"
server="Apache"
connectionTimeout="20000"
scheme="https"
proxyPort="443"
/>
</syntaxhighlight>
f87f1d3436a73aad4b456b14c26b8afac80bd3ad
Filesysteme Tipps und Tricks
0
194
2598
2374
2021-11-26T04:14:42Z
Lollypop
2
Text replacement - "</source" to "</syntaxhighlight"
wikitext
text/x-wiki
[[Category:Linux]]
[[Category:ZFS]]
==Get the creation time... not the change time==
===Creation time on zfs===
====Find the filesystem where the file resides====
<syntaxhighlight lang=bash>
# df -h /var/data/dumps/sackhalter_20140407.dump
Filesystem Size Used Avail Use% Mounted on
data/backup/dumps 24G 8.6G 16G 36% /var/data/dumps
</syntaxhighlight>
====Find the i-node number of the file====
<syntaxhighlight lang=bash>
# ls -i /var/data/dumps/sackhalter_20140407.dump
103 /var/data/dumps/sackhalter_20140407.dump
</syntaxhighlight>
====Get the metadata of the file====
<syntaxhighlight lang=bash>
# zdb -dddd data/backup/dumps 103 | grep crtime
crtime Tue Jul 29 13:00:18 2014
</syntaxhighlight>
===Creation time on ext2/3/4===
====Find the filesystem where the file resides====
<syntaxhighlight lang=bash>
# df -h /usr/bin/passwd
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 8.4G 5.8G 60% /
</syntaxhighlight>
====Find the i-node number of the file====
<syntaxhighlight lang=bash>
# ls -i /usr/bin/passwd
130776 /usr/bin/passwd
</syntaxhighlight>
====Get the metadata of the file====
<syntaxhighlight lang=bash>
# debugfs -R 'stat <130776>' /dev/sda1 2>/dev/null | grep crtime
crtime: 0x5391870e:a6803fc8 -- Fri Jun 6 11:17:02 2014
</syntaxhighlight>
====Nice oneliner====
<syntaxhighlight lang=bash>
# file=/etc/passwd ; ls -1i ${file} | nawk -v dev=$(df --output=source ${file} | tail -n +2) 'BEGIN{debugfs="debugfs -R \"stat <INODE>\" DEVICE 2>/dev/null";}{file=$2;command=debugfs;gsub(/INODE/,$1,command);gsub(/DEVICE/,dev,command); while (command | getline){if(/crtime/){print $0,file}}; close(command);}'
crtime: 0x54009e05:24f51228 -- Fri Aug 29 17:36:37 2014 /etc/passwd
</syntaxhighlight>
669597d335fe8d1fbf3f1cf4ed59078adfd7ad97
IPS cheat sheet
0
98
2599
2533
2021-11-26T04:18:32Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Solaris11]]
=Cheat sheet=
[[File:Ips-one-liners.pdf|page=1|600px]]
=Examples=
== Switching to Oracle Support Repository ==
1. Get the client certificate at https://pkg-register.oracle.com/
(x) Oracle Solaris 11 Support -> Submit
Comment: <i>hostname</i> -> Accept
-> Download Key
-> Download Certificate
2. Copy it to your Solaris 11 host into /var/pkg/ssl.
<pre>
# mv Oracle_Solaris_11_Support.key.pem /var/pkg/ssl
# mv Oracle_Solaris_11_Support.certificate.pem /var/pkg/ssl
</pre>
3. Set your proxy environment if needed:
<pre>
# http_proxy=http://proxy:3128/
# https_proxy=http://proxy:3128/
# export http_proxy https_proxy
</pre>
4. Set the publisher to the support repository:
<pre>
# pkg set-publisher \
 -k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
 -c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
 -G '*' -g https://pkg.oracle.com/solaris/support/ solaris
</pre>
5. Refresh the catalog:
<pre>
# pkg refresh --full
</pre>
6. Check for updates:
<pre>
# pkg update -nv
</pre>
7. If needed or wanted, do the update:
<pre>
# pkg update -v
</pre>
== Adding another repository ==
Until now the repository of OpenCSW is [http://www.opencsw.org/2012/02/ips-repository-in-the-works/ not in IPS format].
== Repairing packages ==
Damn fast fingers did it! Lucky Luke style... the man who deletes files faster than his shadow...
<pre>
root@solaris11:/home/lollypop# rm /usr/bin/ls
</pre>
So... the file is gone... oops.
No problem in Solaris 11. You can repair package contents!
But... in which package was it?
<pre>
root@solaris11:/home/lollypop# pkg search /usr/bin/ls
INDEX ACTION VALUE PACKAGE
path file usr/bin/ls pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
</pre>
So it is in the package pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0 . Let us take a look at what the system thinks is wrong with our files from this package:
<pre>
root@solaris11:/home/lollypop# pkg verify pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
PACKAGE STATUS
pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
</pre>
That is exactly what we thought :-).
So let us fix it!
<pre>
root@solaris11:/home/lollypop# pkg fix pkg:/system/core-os@0.5.11-0.175.0.10.1.0.0
Verifying: pkg://solaris/system/core-os ERROR
file: usr/bin/ls
Missing: regular file does not exist
Created ZFS snapshot: 2013-04-10-07:40:21
Repairing: pkg://solaris/system/core-os
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 1/1 0.0/0.0
PHASE ACTIONS
Update Phase 1/1
PHASE ITEMS
Image State Update Phase 2/2
root@solaris11:/home/lollypop#
</pre>
Beware of trying this with /usr/bin/pkg !!!
=Solaris 11 release=
<syntaxhighlight lang=bash>
$ LANG=C pkg info kernel | nawk '$1 == "Version:"{split($2,version,/\./)}$1 == "Branch:"{split($2,branch,/\./)}END{printf ("Solaris %d.%d Update %d SRU %d SRU-Build %d\n",version[2],version[3],branch[3],branch[4],branch[6])}'
Solaris 5.11 Update 2 SRU 0 SRU-Build 42
</syntaxhighlight>
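The parsing logic of the one-liner above can be exercised without a Solaris box by feeding it captured <code>pkg info</code> fields. A minimal sketch of the same logic, using plain awk and the sample values from the output above:

```shell
# Turn the "Version:" and "Branch:" fields of "pkg info kernel"
# (read on stdin) into a human-readable release string.
solaris_release() {
    awk '
        $1 == "Version:" { split($2, v, /\./) }
        $1 == "Branch:"  { split($2, b, /\./) }
        END { printf "Solaris %d.%d Update %d SRU %d SRU-Build %d\n", v[2], v[3], b[3], b[4], b[6] }
    '
}
```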
= Update available? =
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
export LANG=C
function check () {
package=$1
# pkg list -af entire@latest
local=$(pkg info ${package} 2>&1)
remote=$(pkg info -r ${package} 2>&1)
latest_11_3=$(pkg list -H -af ${package} | nawk '$2 ~ /^0.5.11-0.175.3/{print $2; exit;}')
printf "%s\n%s\nLatest_11.3: %s\n" "${local}" "${remote}" "${latest_11_3}" | nawk -v package="${package}" '
BEGIN{
nr=0;
}
$1=="Version:" {
version[nr]=$2;
next;
}
$1=="Branch:" {
branch[nr++]=$2;
next;
}
$1=="Latest_11.3:" {
split($2, latest_part, "-");
latest_version=latest_part[1];
latest_branch=latest_part[2];
}
/^pkg:/ {
error=$0;
}
END{
if(error) {
printf ("Package %s:\t%s\n", package, error);
status=-1;
} else {
if(branch[0]==branch[1]){
printf ("Package %s:\tUptodate at %s\n", package, branch[0]);
status=0;
}else{
printf ("Package %s:\tUpdate is available: %s -> %s\n", package, branch[0], branch[1]);
split(version[1], version_part, /\./);
split(branch[1], branch_part, /\./);
if(version[1]=="0.5.11") {
be_version=sprintf("%d.%d.%d.%d.%d",version_part[3], branch_part[3], branch_part[4], branch_part[5], branch_part[6]);
}
if(version[1]=="11.4") {
be_version=sprintf("%d.%d.%d.%d.%d",branch_part[1], branch_part[2], branch_part[3], branch_part[5], branch_part[6]);
if (version[0]=="0.5.11" && branch[0] != latest_branch ) {
split(latest_branch, latest_part, /\./);
be_version3=sprintf("%d.%d.%d.%d.%d",version_part[3], latest_part[3], latest_part[4], latest_part[5], latest_part[6]);
printf ("\nTo update and stay in Solaris 11.3-Branch you can use:\n\tpkg install --accept --require-new-be --be-name solaris_%s\n\n", be_version3);
}else if (version[0]=="0.5.11" && branch[0] == latest_branch ) {
printf ("\nYou are at the latest version of the 11.3-Branch (%s), but you can upgrade to 11.4 .\n",branch[0]);
}
}
printf ("\n\nUse:\tpkg update --accept --require-new-be --be-name solaris_%s\n\n\n", be_version);
status=2;
}
}
exit status;
}
'
}
package="entire"
pkg refresh >/dev/null \
    || { echo "Cannot refresh packages" >&2; exit 1; }
if [ $# -gt 0 ]
then
while [ $# -gt 0 ]
do
package=$1
shift
check ${package}
done
else
check ${package}
fi
</syntaxhighlight>
= ZFS automatic snapshots =
<syntaxhighlight lang=bash>
pkg install pkg:/desktop/time-slider
svcadm restart svc:/system/dbus:default
</syntaxhighlight>
89ec1713ca0a3191b554d6baf81a328f222dd81a
Template:Dokumentation/Unterseite
10
59
2600
99
2021-11-26T04:21:22Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
<onlyinclude>{| {{Bausteindesign3}}
| [[Datei:Information icon.svg|30px|Dokumentations-Unterseite|link=]]
|style="width: 100%;"| Diese Seite ist eine Untervorlage von '''[[{{{1|{{#rel2abs:{{FULLPAGENAME}}/..}}}}}]]'''.
|}<includeonly>{{#ifeq:{{NAMESPACE}}|{{ns:10}}|
[[Category:Vorlage:Untervorlage|{{PAGENAME}}]]
<!--Wartung--><span style="display:none;">{{#ifexist:{{#rel2abs:{{FULLPAGENAME}}/..}}
|<!--nichts-->
|{{#if:{{#rel2abs:{{FULLPAGENAME}}/..}}
| [[Vorlage:Dokumentation/Wartung/Unterseite verwaist]]
| [[Vorlage:Dokumentation/Wartung/keine echte Unterseite]]
}}
}}{{#if:{{{1|}}}
| [[Vorlage:Dokumentation/Wartung/Unterseite mit abweichender Oberseite]]
}}</span>
}}</includeonly></onlyinclude>
[[Category:Vorlage:für Vorlagen| {{PAGENAME}}]]
[[Category:Vorlage:mit Kategorisierung]]
bd840d0e7be4a7a84422d2870d0a4fbf204b799e
Category:Araceae
14
79
2601
171
2021-11-26T04:29:11Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Pflanzen]]
An important link on this topic is certainly the [http://www.aroid.org International Aroid Society]
64697f6c741644b984844eef9b568cb1a1ed7e74
Linux udev
0
88
2602
2548
2021-11-26T04:32:46Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Linux|udev]]
==Persistent network interface names==
If you have no <i>/etc/udev/rules.d/70-persistent-net.rules</i> just create one:
<syntaxhighlight lang=bash>
# lshw -C network | awk '/logical name:/{iface=$NF;}/serial:/{mac=$NF;printf "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", ATTR{address}==\"%s\", ATTR{dev_id}==\"0x0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"%s\"\n",mac,iface;}' >> /etc/udev/rules.d/70-persistent-net.rules
</syntaxhighlight>
or add a specific interface to <i>/etc/udev/rules.d/70-persistent-net.rules</i>:
<syntaxhighlight lang=bash>
# MATCHADDR="00:50:56:a1:20:22" INTERFACE=eth2 /lib/udev/write_net_rules
</syntaxhighlight>
Change order with:
<syntaxhighlight lang=bash>
# vi /etc/udev/rules.d/70-persistent-net.rules
</syntaxhighlight>
Then let udev reread the file:
<syntaxhighlight lang=bash>
# udevadm trigger --action=add --subsystem-match=net --verbose
</syntaxhighlight>
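The awk part of the rule-generating one-liner above can be tested offline on captured <code>lshw -C network</code> output. This is a reduced sketch that only emits the MAC-to-name match; the full rule above additionally pins DRIVERS, dev_id, type, and KERNEL:

```shell
# Read "lshw -C network"-style output on stdin and emit one udev rule per NIC.
gen_net_rules() {
    awk '
        /logical name:/ { iface = $NF }   # e.g. "logical name: eth0"
        /serial:/ { printf "SUBSYSTEM==\"net\", ATTR{address}==\"%s\", NAME=\"%s\"\n", $NF, iface }
    '
}
```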
==udev for MySQL on LVM with InnoDB on raw devices==
===Make your rule===
<syntaxhighlight lang=bash>
root@mysql:~# cat /etc/udev/rules.d/99-lvm-mysql-permissions.rules
# udevadm info --query=all --name /dev/vg-data/lv-rawdisk-innodb01
# DM_VG_NAME=vg-data
# DM_LV_NAME=lv-rawdisk-innodb01
ENV{DM_VG_NAME}=="vg-data" ENV{DM_LV_NAME}=="lv-rawdisk-innodb*" OWNER="mysql" GROUP="mysql"
</syntaxhighlight>
===Test your rule===
<syntaxhighlight lang=bash>
root@mysql:~# ls -al /dev/vg-data/lv-rawdisk-innodb01
lrwxrwxrwx 1 root root 7 Aug 12 14:45 /dev/vg-data/lv-rawdisk-innodb01 -> ../dm-0
root@mysql:~# udevadm test /class/block/dm-0
...
read rules file: /etc/udev/rules.d/99-lvm-mysql-permissions.rules
specified user 'mysql' unknown
...
</syntaxhighlight>
OK, user mysql is unknown... maybe I should install MySQL ;-).
After that:
<syntaxhighlight lang=bash>
root@mysql:~# id -a mysql
uid=108(mysql) gid=114(mysql) groups=114(mysql)
root@mysql:~# udevadm test /class/block/dm-0
...
OWNER 108 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
GROUP 114 /etc/udev/rules.d/99-lvm-mysql-permissions.rules:4
handling device node '/dev/dm-0', devnum=b252:0, mode=0660, uid=108, gid=114
set permissions /dev/dm-0, 060660, uid=108, gid=114
...
</syntaxhighlight>
===Trigger your rule===
<syntaxhighlight lang=bash>
root@mysql:~# udevadm trigger
root@mysql:~# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
f6062facd3ebc16ec6db61d089fde3e2fd80bd08
Category:Poaceae
14
78
2603
159
2021-11-26T04:33:50Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
[[Category:Pflanzen]]
08b8105946cb87ee402c2bd8e376f5c6f8248d48
Amorphophallus fuscus
0
80
2604
167
2021-11-26T04:34:53Z
Lollypop
2
Text replacement - "[[Kategorie:" to "[[Category:"
wikitext
text/x-wiki
{{Taxobox
| Taxon_Name = Amorphophallus fuscus
| Taxon_WissName = Amorphophallus fuscus
| Taxon_Rang = Art
| Taxon_Autor = Hett. (N. Thailand)
| Taxon2_WissName = Amorphophallus
| Taxon2_Rang = Gattung
| Taxon3_WissName =
| Taxon3_Rang = Tribus
| Taxon4_WissName =
| Taxon4_Rang = Unterfamilie
| Taxon5_Name = Araceae
| Taxon5_WissName =
| Taxon5_Rang = Familie
| Taxon6_Name =
| Taxon6_WissName =
| Taxon6_Rang = Ordnung
| Bild =
| Bildbeschreibung =
}}
== Beschreibung ==
[[Category:Amorphophallus]]
cfeff91f1afd4fa1eeffe2886c85b4bdb25e546a
Mauersegler
0
381
2605
2224
2021-11-26T08:53:37Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Vögel]]
==Playing lure calls with a RaspberryPi and Bluetooth speakers==
===What I used to build it===
* RaspberryPi 3B.
* Micro SD card (for the required size see the current requirements at [https://www.raspbian.org/ Raspbian.org]).
* USB power supply with a micro-USB plug (for the RaspberryPi).
* Bluetooth-capable, waterproof speakers (in my case [https://www.amazon.de/gp/product/B07QY66L9M Wireless Bluetooth Lautsprecher, Sonkir Tragbarer Bluetooth 5.0 TWS Lautsprecher mit Dual-Treiber Bass, 3D-Stereo, FM Radio, Freisprechfunktion, integriertem 1500-mAh-Akku]).
* USB power supply with a micro-USB plug (for the speakers; other speakers may need a different power supply!).
* My laptop has an SD card reader for getting the operating system onto the SD card. If yours does not, you also need a USB SD card reader, which does not cost much either.
===Reasons for this choice===
Thanks to the good range of Bluetooth, the RaspberryPi can stay in the house; only the speakers and one power supply have to go outside.
===Installing Raspbian on the Pi===
* Instructions etc. can be found at [https://www.raspbian.org/ Raspbian.org]; it would make no sense to duplicate all of that here.
===Enabling Bluetooth===
Connect to the RaspberryPi via ssh as the user pi.
Windows users can use [https://www.putty.org/ Putty] for this.
====Permanently enable the Bluetooth service====
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl enable bluetooth.service
pi@raspberrypi:~ $ sudo systemctl start bluetooth.service
</syntaxhighlight>
====Check the Bluetooth service status====
It should look like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-02-03 09:18:58 CET; 32min ago
Docs: man:bluetoothd(8)
Main PID: 943 (bluetoothd)
Status: "Running"
Tasks: 1 (limit: 2062)
CGroup: /system.slice/bluetooth.service
`-943 /usr/lib/bluetooth/bluetoothd
...
</syntaxhighlight>
If, however, it looks like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo systemctl status bluetooth.service
* bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
</syntaxhighlight>
then the corresponding Bluetooth driver modules are not loaded in the operating system.
====Enable the Bluetooth modules====
If the modules are disabled (blacklisted), we have to change that.
The command
 egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
shows you the file where this happens.
Example:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ egrep "(hci_uart|btbcm)" /etc/modprobe.d/*.conf
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist hci_uart
/etc/modprobe.d/blacklist-bluetooth.conf:blacklist btbcm
</syntaxhighlight>
In my example this is <i>/etc/modprobe.d/blacklist-bluetooth.conf</i>.
The command
 sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" <file>
comments out the <i>blacklist</i> lines for the two required modules.
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo perl -pi -e "s/(blacklist.*(hci_uart|btbcm))/#\1/g" /etc/modprobe.d/blacklist-bluetooth.conf
</syntaxhighlight>
Then perform a reboot:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo reboot
</syntaxhighlight>
Once the modules are loaded after the reboot, it looks like this:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo lsmod | grep bt
btbcm 16384 1 hci_uart
bluetooth 393216 37 hci_uart,bnep,btbcm,rfcomm
</syntaxhighlight>
Then [[#Check the Bluetooth service status|check the Bluetooth service status]] again.
Now everything should be fine.
===Finding the Bluetooth speakers===
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo bluetoothctl
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: yes
</syntaxhighlight>
Now switch the Bluetooth speakers on
<syntaxhighlight lang=bash>
[NEW] Device D6:53:25:BE:37:73 SPEAKER5.0
</syntaxhighlight>
Ah, there it is!
Now connect and leave:
<syntaxhighlight lang=bash>
[bluetooth]# scan off
[CHG] Controller B8:27:EB:E6:D3:79 Discovering: no
Discovery stopped
[bluetooth]#
[bluetooth]# connect D6:53:25:BE:37:73
Attempting to connect to D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Connected: yes
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device D6:53:25:BE:37:73 ServicesResolved: yes
[CHG] Device D6:53:25:BE:37:73 Paired: yes
Connection successful
[SPEAKER5.0]# trust D6:53:25:BE:37:73
[CHG] Device D6:53:25:BE:37:73 Trusted: yes
Changing D6:53:25:BE:37:73 trust succeeded
[SPEAKER5.0]# paired-devices
Device D6:53:25:BE:37:73 SPEAKER5.0
[SPEAKER5.0]# quit
</syntaxhighlight>
Now the address of the speakers has to be entered in <i>/etc/asound.conf</i> (the file normally does not exist yet; simply create it).
<syntaxhighlight>
pcm.!default {
    type plug
    slave {
        pcm {
            type bluealsa
            device D6:53:25:BE:37:73
            profile "a2dp"
        }
    }
    hint {
        show on
        description "Bluetooth SPEAKER5.0"
    }
}
ctl.!default {
    type bluealsa
}
</syntaxhighlight>
Reboot once more:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ sudo reboot
</syntaxhighlight>
When the Raspberry Pi starts, the speakers should now also play a short signal as the pairing takes place.
Now we can run a test:
<syntaxhighlight lang=bash>
pi@raspberrypi:~ $ aplay -L
...
default
Bluetooth SPEAKER5.0
...
</syntaxhighlight>
9d3710a8d56dd81562b0d33e1f40c332736f0cb6
Category:Vögel
14
391
2606
2021-11-26T08:54:21Z
Lollypop
2
Created page with "[[Category:Tiere]]"
wikitext
text/x-wiki
[[Category:Tiere]]
e13bd228f8f258aad2e37ecf6d50d8bdebacf5d0
Chrome
0
351
2607
1949
2021-11-26T08:55:51Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Web]]
==Overview of Chrome URLS==
* chrome://about/
== Apps ==
* chrome://apps/
== Extensions ==
* chrome://extensions/
== Special settings ==
* chrome://flags/
== Your Downloads ==
chrome://downloads/
==Useful URLs==
* chrome://net-internals/#dns -> Clear host cache
* chrome://net-internals/#sockets
150dd918c474968aef3abe2dbadb037e0e4f9cf2
DNS cheatsheet
0
290
2608
2351
2021-11-26T08:57:29Z
Lollypop
2
wikitext
text/x-wiki
[[Category:DNS]]
=dig=
==Compare several nameserver if SOA matches==
<syntaxhighlight lang=bash>
$ domain=denic.de
$ printf "Domain: %s\n" ${domain} ; for ns in $(dig +short ${domain} ns) ; do printf "Nameserver: %s => SOA: %s\n" ${ns} "$(dig +short ${domain} soa @${ns})" ; done
Domain: denic.de
Nameserver: ns2.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns1.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns3.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
</syntaxhighlight>
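Eyeballing three serials is easy; with more nameservers a small awk filter can compare them automatically. The sketch below runs on the captured loop output from above (field 7 is the SOA serial); in practice one would pipe the loop's output straight into it:
<syntaxhighlight lang=bash>
# compare SOA serials from the loop output above; prints OK when all agree
awk '/^Nameserver:/ { if (!($7 in seen)) n++; seen[$7]=1 }
     END { if (n==1) print "OK: serials match"; else print "MISMATCH" }' <<'EOF'
Nameserver: ns2.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns1.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
Nameserver: ns3.denic.de. => SOA: ns1.denic.de. its.denic.de. 1468491003 10800 1800 3600000 1800
EOF
</syntaxhighlight>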
==dns2hosts==
<syntaxhighlight lang=perl>
#!/usr/bin/perl
use Net::DNS;
use Net::DNS qw(rrsort);
my @nameservers = ("auth-dns-1.domain.de","auth-dns-2.domain.de");
my $net_regex = '10\.11\.';
my $domain = 'domain.de';
# cut_off_domain=0 : host.domain
# cut_off_domain=1 : short name only
# cut_off_domain=2 : short name and with domain
my $cut_off_domain=1;
my $res = Net::DNS::Resolver->new;
$res->nameservers(@nameservers);
Net::DNS::RR::A->set_rrsort_func ('asorted',
sub {($a,$b)=($Net::DNS::a,$Net::DNS::b);
$a->{'address'} cmp $b->{'address'}});
# Get the zone
my @zone = $res->axfr($domain);
# All A records
my @addresses = grep { $_->type eq "A" } @zone;
# Filter out net if $net_regex is set
@addresses = grep { $_->address =~ /$net_regex/ } @addresses if(defined($net_regex));
# All CNAME records
my @cnames = grep { $_->type eq "CNAME" } @zone;
my $host;
foreach $rr (rrsort("A","asorted", @addresses)) {
$host=$rr->name;
$host=(split /\./,$host)[0] if ($cut_off_domain eq 1);
$host=(split /\./,$rr->name)[0]." ".$rr->name if ($cut_off_domain eq 2);
print $rr->address."\t".$host;
foreach $cname (grep { $_->cname eq $rr->name } @cnames) {
$host=$cname->name;
$host=(split /\./,$host)[0] if ($cut_off_domain eq 1);
$host=(split /\./,$cname->name)[0]." ".$cname->name if ($cut_off_domain eq 2);
print " ".$host;
}
print "\n";
}
</syntaxhighlight>
99d38b3383f7b0f3b817ee487b771215e12572dd
MySQL Symmetric Encryption
0
272
2609
2540
2021-11-26T08:58:44Z
Lollypop
2
wikitext
text/x-wiki
[[Category:MySQL]]
<syntaxhighlight lang=mysql>
> select hex(aes_encrypt(rpad("abcqweqweqweqwe",31,"~"),"mykey")) as encrypted;
+------------------------------------------------------------------+
| encrypted |
+------------------------------------------------------------------+
| E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843 |
+------------------------------------------------------------------+
</syntaxhighlight>
<syntaxhighlight lang=mysql>
> select trim(trailing "~" from aes_decrypt(unhex("E5FB394568B8F03D43CF083F5065C959AC6E22BDB7749E4D97F5ABC72B08D843"),"mykey")) as decrypted;
+-----------------+
| decrypted |
+-----------------+
| abcqweqweqweqwe |
+-----------------+
</syntaxhighlight>
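The rpad/trim pair pads the plaintext with "~" to a fixed length before encryption and strips the padding again after decryption, presumably so the ciphertext length does not reveal the exact plaintext length. Note that this corrupts plaintext that legitimately ends in "~". The same padding idea in shell terms, as a sketch:
<syntaxhighlight lang=bash>
s="abcqweqweqweqwe"
padded=$(printf '%-31s' "${s}" | tr ' ' '~')   # like rpad(s,31,"~"); assumes s contains no spaces
echo "${padded}" | sed 's/~*$//'               # like trim(trailing "~" from ...)
# prints: abcqweqweqweqwe
</syntaxhighlight>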
c5db729e719f57030e2f823f903cde7569bc3b02
ZFS on Linux
0
222
2610
2520
2021-11-30T16:27:22Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SeC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
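The exponent() helper in the awk one-liner is simply log2 of the physical sector size. The same computation as a standalone sketch (4096-byte sectors give ashift=12):
<syntaxhighlight lang=bash>
# ashift is log2 of the physical sector size
sector=4096
ashift=0
while [ $((1 << ashift)) -lt ${sector} ] ; do ashift=$((ashift+1)) ; done
echo "ashift=${ashift}"    # ashift=12
</syntaxhighlight>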
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
!!!BE CAREFUL with the name after @ !!!
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!!!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
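The dd line in the unit writes exactly 32 random bytes, which is the 256-bit key size that keyformat=raw expects. The key-generation step in isolation (here /tmp stands in for the unit's /run/zfs-cryptswap.&lt;name&gt; runtime directory, so nothing real is touched):
<syntaxhighlight lang=bash>
# sketch of the key generation step only
dd if=/dev/urandom of=/tmp/cryptswap.key bs=32 count=1 2>/dev/null
stat -c %s /tmp/cryptswap.key    # 32 (bytes, i.e. a 256-bit raw key)
</syntaxhighlight>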
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which these options are read when the module is loaded early in boot.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
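The metadata usage relative to its limit can be computed directly from two of these counters (the third column is the value; the numbers below are taken from the sample output above):
<syntaxhighlight lang=bash>
# arc_meta_used as a percentage of arc_meta_limit
awk '$1=="arc_meta_used"{u=$3} $1=="arc_meta_limit"{l=$3}
     END{printf "%.1f%%\n", u*100/l}' <<'EOF'
arc_meta_used                   4    493285336
arc_meta_limit                  4    805306368
EOF
# prints: 61.3%
</syntaxhighlight>
On a live system, point the awk program at /proc/spl/kstat/zfs/arcstats instead of the heredoc.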
===Limit the cache without reboot (non-permanent)===
For example limit it to 512MB (which is too small for production environments, just an example...):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
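As an aside, the $[ ... ] form used in the echo lines above is an obsolete bash arithmetic syntax; it still works, but $(( ... )) is the POSIX spelling and yields the same value:
<syntaxhighlight lang=bash>
echo "$((512*1024*1024))"    # 536870912
</syntaxhighlight>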
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
72699a286110dc61bd78bfe0d933dc4bcf4ee278
2625
2610
2022-01-11T10:36:13Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as rawvmdk-device in VirtualBox as normal user (vmuser in this example) create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the Live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SeC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
!!!BE CAREFUL with the name after @ !!!
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated!!!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which these options are read when the module is loaded early in boot.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
===Limit the cache without reboot (non-permanent)===
For example limit it to 512MB (which is too small for production environments, just an example...):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on Solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
2eac41b6ea478350d01b1057fced36e242c722a2
2626
2625
2022-01-11T15:19:17Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as raw vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (i.e. the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SeC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
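For reference, ashift is just log2 of the physical sector size, which the exponent() function in the awk one-liner computes. The same calculation as a plain shell function (a hypothetical helper, not part of any ZFS tooling):
<syntaxhighlight lang=bash>
# log2 of a power-of-two sector size, by repeated halving
log2 () {
    local value=$1 exp=0
    while [ "${value}" -gt 1 ]
    do
        value=$((value / 2))
        exp=$((exp + 1))
    done
    echo "${exp}"
}
log2 512    # -> 9,  i.e. ashift=9
log2 4096   # -> 12, i.e. ashift=12
</syntaxhighlight>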
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# In the editor:
#   Comment out: GRUB_HIDDEN_TIMEOUT=0
#   Remove "quiet" and "splash" from: GRUB_CMDLINE_LINUX_DEFAULT
#   Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=bash>
$ sudo systemctl edit --force --full zfs-cryptswap@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service
After=zfs-import.target
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
'''BE CAREFUL with the name after the @!'''
The name after the @ is the name of the ZFS volume that will be DESTROYED and recreated every time the unit starts!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase them so scrub/resilver is more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which /etc/modprobe.d is read when the module is loaded at early boot.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats | grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
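Two of these counters can be related directly; as a self-contained sketch, the metadata usage ratio can be computed with awk (the sample values below are copied from the output above into a temporary file):
<syntaxhighlight lang=bash>
# Compute the ARC metadata usage ratio from an arcstats-style dump
# (sample values copied from the output above)
cat > /tmp/arcstats.sample <<'EOF'
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
EOF
awk '$1=="arc_meta_used"{used=$3}
     $1=="arc_meta_limit"{limit=$3}
     END{printf "arc_meta usage: %d%%\n", used*100/limit}' /tmp/arcstats.sample
</syntaxhighlight>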
===Limit the cache without a reboot (non-permanent)===
For example, limit it to 512 MB (which is far too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (which is far too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on Solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
645b2842ebdfeec563efcaae92e03554a313b5b6
NGINX
0
363
2611
2454
2021-12-02T10:16:28Z
Lollypop
2
/* Add module to nginx on Ubuntu */
wikitext
text/x-wiki
[[Category:NGINX]]
[[Category:Webserver]]
==Add module to nginx on Ubuntu==
For example http-auth-ldap:
<syntaxhighlight lang=bash>
mkdir /opt/src
cd /opt/src
apt source nginx
cd nginx-*
export HTTPS_PROXY=<your proxy server>
git clone https://github.com/kvspb/nginx-auth-ldap.git debian/modules/http-auth-ldap
./configure \
--with-cc-opt="$(dpkg-buildflags --get CFLAGS) -fPIC $(dpkg-buildflags --get CPPFLAGS)" \
--with-ld-opt="$(dpkg-buildflags --get LDFLAGS) -fPIC" \
--prefix=/usr/share/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--with-http_v2_module \
--with-threads \
--without-http_gzip_module \
--add-dynamic-module=debian/modules/http-auth-ldap
make modules
sudo install --mode=0644 --owner=root --group=root objs/ngx_http_auth_ldap_module.so /usr/lib/nginx/modules/
</syntaxhighlight>
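The module still has to be loaded by nginx; assuming the module path from the install step above, a minimal nginx.conf fragment would be:
<syntaxhighlight lang=nginx>
# Load the dynamic module at the top level of nginx.conf
load_module modules/ngx_http_auth_ldap_module.so;
</syntaxhighlight>
The module's directives are then available; see its README for their syntax.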
0480bc6b220f10ee5aff083bd5077ca07168bc74
2612
2611
2021-12-02T10:16:51Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Webserver]]
==Add module to nginx on Ubuntu==
For example http-auth-ldap:
<syntaxhighlight lang=bash>
mkdir /opt/src
cd /opt/src
apt source nginx
cd nginx-*
export HTTPS_PROXY=<your proxy server>
git clone https://github.com/kvspb/nginx-auth-ldap.git debian/modules/http-auth-ldap
./configure \
--with-cc-opt="$(dpkg-buildflags --get CFLAGS) -fPIC $(dpkg-buildflags --get CPPFLAGS)" \
--with-ld-opt="$(dpkg-buildflags --get LDFLAGS) -fPIC" \
--prefix=/usr/share/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--with-http_v2_module \
--with-threads \
--without-http_gzip_module \
--add-dynamic-module=debian/modules/http-auth-ldap
make modules
sudo install --mode=0644 --owner=root --group=root objs/ngx_http_auth_ldap_module.so /usr/lib/nginx/modules/
</syntaxhighlight>
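The module still has to be loaded by nginx; assuming the module path from the install step above, a minimal nginx.conf fragment would be:
<syntaxhighlight lang=nginx>
# Load the dynamic module at the top level of nginx.conf
load_module modules/ngx_http_auth_ldap_module.so;
</syntaxhighlight>
The module's directives are then available; see its README for their syntax.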
0a197b6f53862fb322a5bddfb071af01dd852ae6
Ansible tips and tricks
0
299
2613
2512
2021-12-15T08:52:50Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
== Ansible commandline ==
=== Get settings for host ===
Gathering all settings for the host given in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
Gathering the groups for the host given in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads some variables from an Oracle response file and sets each of them as a fact (the name is prefixed with oracle_ if it is not already). It also collects them in the variable <i>oracle_environment</i>, which can be passed to <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
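The awk extraction used in the first task can be tried in isolation; the response-file snippet below is fabricated just to exercise the call:
<syntaxhighlight lang=bash>
# Fabricated response-file snippet, only to exercise the awk extraction
cat > /tmp/db_12cr2_sample.rsp <<'EOF'
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
INVENTORY_LOCATION=/u01/app/oraInventory
EOF
for item in ORACLE_HOME ORACLE_BASE INVENTORY_LOCATION
do
    # Same call as in the playbook, with {{ item }} substituted
    printf '%s -> %s\n' "${item}" "$(awk -F '=' '/'${item}'/{print $2;}' /tmp/db_12cr2_sample.rsp)"
done
</syntaxhighlight>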
== Gathering the Oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env._setitem_(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
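The oratab parsing inside the oraenv task can also be checked in isolation (the /etc/oratab content below is made up; comment lines and N-entries must be skipped):
<syntaxhighlight lang=bash>
# Made-up /etc/oratab sample: only the Y-entry should produce output
cat > /tmp/oratab.sample <<'EOF'
# This is a comment
ORCL:/u01/app/oracle/product/12.2.0/dbhome_1:Y
TEST:/u01/app/oracle/product/12.2.0/dbhome_1:N
EOF
awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /tmp/oratab.sample
</syntaxhighlight>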
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
f53741bc7dc6aecb7b313ba962bb9454ac9647b9
Spamassassin
0
392
2614
2021-12-16T14:29:11Z
Lollypop
2
Created page with "[[Category:Mail]] == Debugging == === Razor2 === In spamassassins local.cf: <syntaxhighlight> razor_config /etc/spamassassin/Razor2/razor-agent.conf </syntaxhighlight> The config /etc/spamassassin/Razor2/razor-agent.conf might be like this: <syntaxhighlight> # # Razor2 config file # # Autogenerated by Razor-Agents v2.82 # Tue Nov 28 13:36:10 2006 # Created with all default values # # see razor-agent.conf(5) man page # debuglevel = 4 identity..."
wikitext
text/x-wiki
[[Category:Mail]]
== Debugging ==
=== Razor2 ===
In SpamAssassin's local.cf:
<syntaxhighlight>
razor_config /etc/spamassassin/Razor2/razor-agent.conf
</syntaxhighlight>
The config file /etc/spamassassin/Razor2/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
</syntaxhighlight>
c8a5b26fa390662c9754d74fdd245098dc4f28ff
2615
2614
2021-12-16T14:39:31Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Mail]]
== Debugging ==
=== Razor2 ===
In SpamAssassin's local.cf:
<syntaxhighlight>
razor_config /etc/spamassassin/Razor2/razor-agent.conf
</syntaxhighlight>
The config file /etc/spamassassin/Razor2/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
...
</syntaxhighlight>
=== Bayes ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D bayes < ~/sample-ham.txt 2>&1 | less
Dec 16 15:33:03.945 [3194] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x1b36a5eb0), bayes_store_module=Mail::SpamAssassin::BayesStore::SQL
Dec 16 15:33:03.976 [3194] dbg: bayes: using username: exim
Dec 16 15:33:03.976 [3194] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::SQL=HASH(0x1b4f9ab50)
Dec 16 15:33:04.001 [3194] dbg: bayes: database connection established
Dec 16 15:33:04.002 [3194] dbg: bayes: found bayes db version 3
Dec 16 15:33:04.003 [3194] dbg: bayes: Using userid: 666
Dec 16 15:33:04.183 [3194] dbg: bayes: corpus size: nspam = 345, nham = 925
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized body: 71 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized uri: 34 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized invisible: 0 tokens
...
</syntaxhighlight>
2eeac0b85dd2db9f51f958d37cf40cdab688829e
2616
2615
2021-12-16T14:49:30Z
Lollypop
2
/* Bayes */
wikitext
text/x-wiki
[[Category:Mail]]
== Debugging ==
=== Razor2 ===
In SpamAssassin's local.cf:
<syntaxhighlight>
razor_config /etc/spamassassin/Razor2/razor-agent.conf
</syntaxhighlight>
The config file /etc/spamassassin/Razor2/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
...
</syntaxhighlight>
=== Bayes ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D bayes < ~/sample-ham.txt 2>&1 | less
Dec 16 15:33:03.945 [3194] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x1b36a5eb0), bayes_store_module=Mail::SpamAssassin::BayesStore::SQL
Dec 16 15:33:03.976 [3194] dbg: bayes: using username: exim
Dec 16 15:33:03.976 [3194] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::SQL=HASH(0x1b4f9ab50)
Dec 16 15:33:04.001 [3194] dbg: bayes: database connection established
Dec 16 15:33:04.002 [3194] dbg: bayes: found bayes db version 3
Dec 16 15:33:04.003 [3194] dbg: bayes: Using userid: 666
Dec 16 15:33:04.183 [3194] dbg: bayes: corpus size: nspam = 345, nham = 925
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized body: 71 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized uri: 34 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized invisible: 0 tokens
...
</syntaxhighlight>
If you find something like:
<pre>
dbg: bayes: _get_db_version: SQL error: Malformed packet
...
bayes: database version 0 is different than we understand (3), aborting! at .../lib/site_perl/Mail/SpamAssassin/BayesStore/SQL.pm line 139.
</pre>
You might try to disable the MySQL query_cache:
<syntaxhighlight lang=mysql>
mysql> set GLOBAL query_cache_type=0;
Query OK, 0 rows affected, 1 warning (0.00 sec)
</syntaxhighlight>
This is no big deal, as the query cache is deprecated and has been removed in MySQL 8.0.
If it helps, don't forget to put the setting in your MySQL config as well.
06a0aa2232ad4b69e0061363373ce27edbd8938e
2617
2616
2021-12-16T14:53:41Z
Lollypop
2
/* Bayes */
wikitext
text/x-wiki
[[Category:Mail]]
== Debugging ==
=== Razor2 ===
In SpamAssassin's local.cf:
<syntaxhighlight>
razor_config /etc/spamassassin/Razor2/razor-agent.conf
</syntaxhighlight>
The config file /etc/spamassassin/Razor2/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
...
</syntaxhighlight>
= SpamAssassin =
=== Bayes ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D bayes < ~/sample-ham.txt 2>&1 | less
Dec 16 15:33:03.945 [3194] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x1b36a5eb0), bayes_store_module=Mail::SpamAssassin::BayesStore::SQL
Dec 16 15:33:03.976 [3194] dbg: bayes: using username: exim
Dec 16 15:33:03.976 [3194] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::SQL=HASH(0x1b4f9ab50)
Dec 16 15:33:04.001 [3194] dbg: bayes: database connection established
Dec 16 15:33:04.002 [3194] dbg: bayes: found bayes db version 3
Dec 16 15:33:04.003 [3194] dbg: bayes: Using userid: 666
Dec 16 15:33:04.183 [3194] dbg: bayes: corpus size: nspam = 345, nham = 925
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized body: 71 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized uri: 34 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized invisible: 0 tokens
...
</syntaxhighlight>
If you find something like:
<pre>
dbg: bayes: _get_db_version: SQL error: Malformed packet
...
bayes: database version 0 is different than we understand (3), aborting! at .../lib/site_perl/Mail/SpamAssassin/BayesStore/SQL.pm line 139.
</pre>
You might try to disable the MySQL query_cache:
<syntaxhighlight lang=mysql>
mysql> set GLOBAL query_cache_type=0;
Query OK, 0 rows affected, 1 warning (0.00 sec)
</syntaxhighlight>
This is no big deal, as the query cache is deprecated and has been removed in MySQL 8.0.
If it helps, don't forget to put the setting in your MySQL config as well.
Another way to check whether your Bayes store is working:
<syntaxhighlight lang=bash>
$ sa-learn --siteconfigpath=/etc/spamassassin --dump magic -u <your user>
0.000 0 3 0 non-token data: bayes db version
0.000 0 6801468 0 non-token data: nspam
0.000 0 2184181 0 non-token data: nham
0.000 0 1776152 0 non-token data: ntokens
0.000 0 1639643612 0 non-token data: oldest atime
0.000 0 1639663214 0 non-token data: newest atime
0.000 0 0 0 non-token data: last journal sync atime
0.000 0 1639643616 0 non-token data: last expiry atime
0.000 0 43200 0 non-token data: last expire atime delta
0.000 0 2137 0 non-token data: last expire reduction count
</syntaxhighlight>
7ad14fcadbf778664d611aa2acb0e6518da68cd0
2618
2617
2021-12-16T14:53:57Z
Lollypop
2
/* SpamAssassin */
wikitext
text/x-wiki
[[Category:Mail]]
== Debugging ==
=== Razor2 ===
In SpamAssassin's local.cf:
<syntaxhighlight>
razor_config /etc/spamassassin/Razor2/razor-agent.conf
</syntaxhighlight>
The config file /etc/spamassassin/Razor2/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
...
</syntaxhighlight>
=== Bayes ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D bayes < ~/sample-ham.txt 2>&1 | less
Dec 16 15:33:03.945 [3194] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x1b36a5eb0), bayes_store_module=Mail::SpamAssassin::BayesStore::SQL
Dec 16 15:33:03.976 [3194] dbg: bayes: using username: exim
Dec 16 15:33:03.976 [3194] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::SQL=HASH(0x1b4f9ab50)
Dec 16 15:33:04.001 [3194] dbg: bayes: database connection established
Dec 16 15:33:04.002 [3194] dbg: bayes: found bayes db version 3
Dec 16 15:33:04.003 [3194] dbg: bayes: Using userid: 666
Dec 16 15:33:04.183 [3194] dbg: bayes: corpus size: nspam = 345, nham = 925
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized body: 71 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized uri: 34 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized invisible: 0 tokens
...
</syntaxhighlight>
If you find something like:
<pre>
dbg: bayes: _get_db_version: SQL error: Malformed packet
...
bayes: database version 0 is different than we understand (3), aborting! at .../lib/site_perl/Mail/SpamAssassin/BayesStore/SQL.pm line 139.
</pre>
You might try to disable the MySQL query_cache:
<syntaxhighlight lang=mysql>
mysql> set GLOBAL query_cache_type=0;
Query OK, 0 rows affected, 1 warning (0.00 sec)
</syntaxhighlight>
This is no big deal, as the query cache is deprecated and has been removed in MySQL 8.0.
If it helps, don't forget to put the setting in your MySQL config as well.
Another way to check whether your Bayes store is working:
<syntaxhighlight lang=bash>
$ sa-learn --siteconfigpath=/etc/spamassassin --dump magic -u <your user>
0.000 0 3 0 non-token data: bayes db version
0.000 0 6801468 0 non-token data: nspam
0.000 0 2184181 0 non-token data: nham
0.000 0 1776152 0 non-token data: ntokens
0.000 0 1639643612 0 non-token data: oldest atime
0.000 0 1639663214 0 non-token data: newest atime
0.000 0 0 0 non-token data: last journal sync atime
0.000 0 1639643616 0 non-token data: last expiry atime
0.000 0 43200 0 non-token data: last expire atime delta
0.000 0 2137 0 non-token data: last expire reduction count
</syntaxhighlight>
e5f99d8c7483b9a7ac35b94330074d8a357a0f9a
2619
2618
2021-12-16T14:54:23Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Mail]]
= SpamAssassin =
== Debugging ==
=== Razor2 ===
In SpamAssassin's local.cf:
<syntaxhighlight>
razor_config /etc/spamassassin/Razor2/razor-agent.conf
</syntaxhighlight>
The config file /etc/spamassassin/Razor2/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
...
</syntaxhighlight>
=== Bayes ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D bayes < ~/sample-ham.txt 2>&1 | less
Dec 16 15:33:03.945 [3194] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x1b36a5eb0), bayes_store_module=Mail::SpamAssassin::BayesStore::SQL
Dec 16 15:33:03.976 [3194] dbg: bayes: using username: exim
Dec 16 15:33:03.976 [3194] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::SQL=HASH(0x1b4f9ab50)
Dec 16 15:33:04.001 [3194] dbg: bayes: database connection established
Dec 16 15:33:04.002 [3194] dbg: bayes: found bayes db version 3
Dec 16 15:33:04.003 [3194] dbg: bayes: Using userid: 666
Dec 16 15:33:04.183 [3194] dbg: bayes: corpus size: nspam = 345, nham = 925
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized body: 71 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized uri: 34 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized invisible: 0 tokens
...
</syntaxhighlight>
If you find something like:
<pre>
dbg: bayes: _get_db_version: SQL error: Malformed packet
...
bayes: database version 0 is different than we understand (3), aborting! at .../lib/site_perl/Mail/SpamAssassin/BayesStore/SQL.pm line 139.
</pre>
You might try to disable the MySQL query_cache:
<syntaxhighlight lang=mysql>
mysql> set GLOBAL query_cache_type=0;
Query OK, 0 rows affected, 1 warning (0.00 sec)
</syntaxhighlight>
This is no big deal, as the query cache is deprecated and has been removed in MySQL 8.0.
If it helps, don't forget to put the setting in your MySQL config as well.
Another way to check whether your Bayes store is working:
<syntaxhighlight lang=bash>
$ sa-learn --siteconfigpath=/etc/spamassassin --dump magic -u <your user>
0.000 0 3 0 non-token data: bayes db version
0.000 0 6801468 0 non-token data: nspam
0.000 0 2184181 0 non-token data: nham
0.000 0 1776152 0 non-token data: ntokens
0.000 0 1639643612 0 non-token data: oldest atime
0.000 0 1639663214 0 non-token data: newest atime
0.000 0 0 0 non-token data: last journal sync atime
0.000 0 1639643616 0 non-token data: last expiry atime
0.000 0 43200 0 non-token data: last expire atime delta
0.000 0 2137 0 non-token data: last expire reduction count
</syntaxhighlight>
f2137144678835e5f432797d3dcca847e52a3fad
2620
2619
2021-12-16T15:21:28Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Mail]]
= SpamAssassin =
== Razor2 ==
=== Config ===
In SpamAssassin's local.cf:
<syntaxhighlight>
loadplugin Mail::SpamAssassin::Plugin::Razor2
use_razor2 1
razor_config /etc/razor/razor-agent.conf
</syntaxhighlight>
The config file /etc/razor/razor-agent.conf might look like this:
<syntaxhighlight>
#
# Razor2 config file
#
# Autogenerated by Razor-Agents v2.82
# Tue Nov 28 13:36:10 2006
# Created with all default values
#
# see razor-agent.conf(5) man page
#
debuglevel = 4
identity = identity
ignorelist = 0
listfile_catalogue = servers.catalogue.lst
listfile_discovery = servers.discovery.lst
listfile_nomination = servers.nomination.lst
logfile = /var/log/razor-agent.log
logic_method = 4
min_cf = ac
razordiscovery = discovery.spamnet.com
rediscovery_wait = 3600
report_headers = 1
turn_off_discovery = 0
use_engines = 4,8
whitelist = razor-whitelist
razorhome = /etc/exim-local/Razor2
</syntaxhighlight>
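If the configured razorhome does not yet contain an identity and server lists, they can usually be bootstrapped with razor-admin (a sketch; the path assumes the razorhome configured above, adjust to your setup):
<syntaxhighlight lang=bash>
# Create the razorhome, discover servers, and register an identity
# (the -home= path mirrors the razorhome setting above).
$ razor-admin -home=/etc/exim-local/Razor2 -create
$ razor-admin -home=/etc/exim-local/Razor2 -discover
$ razor-admin -home=/etc/exim-local/Razor2 -register
</syntaxhighlight>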
=== Debugging ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D razor2 < ~/sample-ham.txt 2>&1 | less
Dec 16 15:20:39.223 [26376] dbg: razor2: razor2 is available, version 2.86
Razor-Log: read_file: 16 items read from /etc/spamassassin/Razor2/razor-agent.conf
Razor-Log: Found razorhome: /etc/exim-local/Razor2
Dec 16 15:20:41.248733 check[26376]: [ 2] [bootup] Logging initiated LogDebugLevel=9 to stdout
Dec 16 15:20:41.248919 check[26376]: [ 5] computed razorhome=/etc/exim-local/Razor2, conf=/etc/spamassassin/Razor2/razor-agent.conf, ident=/etc/exim-local/Razor2/identity
Dec 16 15:20:41.248996 check[26376]: [ 8] Client supported_engines: 4 8
Dec 16 15:20:41.249307 check[26376]: [ 8] prep_mail done: mail 1 headers=3944, mime0=562
Dec 16 15:20:41.249571 check[26376]: [ 5] read_file: 1 items read from /etc/exim-local/Razor2/servers.discovery.lst
Dec 16 15:20:41.249819 check[26376]: [ 5] read_file: 4 items read from /etc/exim-local/Razor2/servers.nomination.lst
Dec 16 15:20:41.250038 check[26376]: [ 5] read_file: 3 items read from /etc/exim-local/Razor2/servers.catalogue.lst
Dec 16 15:20:41.250164 check[26376]: [ 9] Assigning defaults to n002.cloudmark.com
Dec 16 15:20:41.250212 check[26376]: [ 9] Assigning defaults to n004.cloudmark.com
Dec 16 15:20:41.250250 check[26376]: [ 9] Assigning defaults to n001.cloudmark.com
Dec 16 15:20:41.250286 check[26376]: [ 9] Assigning defaults to n003.cloudmark.com
Dec 16 15:20:41.250323 check[26376]: [ 9] Assigning defaults to c303.cloudmark.com
Dec 16 15:20:41.250361 check[26376]: [ 9] Assigning defaults to c301.cloudmark.com
Dec 16 15:20:41.250398 check[26376]: [ 9] Assigning defaults to c302.cloudmark.com
...
</syntaxhighlight>
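Independent of SpamAssassin, razor-check can be run directly against a sample message; it exits 0 when the message is listed by Razor and non-zero otherwise (a sketch; ~/sample-spam.txt is a placeholder for a message of your own):
<syntaxhighlight lang=bash>
$ razor-check -home=/etc/exim-local/Razor2 -d < ~/sample-spam.txt
$ echo $?
</syntaxhighlight>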
== Bayes ==
=== Debugging ===
<syntaxhighlight lang=bash>
$ spamassassin --siteconfigpath=/etc/spamassassin -D bayes < ~/sample-ham.txt 2>&1 | less
Dec 16 15:33:03.945 [3194] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x1b36a5eb0), bayes_store_module=Mail::SpamAssassin::BayesStore::SQL
Dec 16 15:33:03.976 [3194] dbg: bayes: using username: exim
Dec 16 15:33:03.976 [3194] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::SQL=HASH(0x1b4f9ab50)
Dec 16 15:33:04.001 [3194] dbg: bayes: database connection established
Dec 16 15:33:04.002 [3194] dbg: bayes: found bayes db version 3
Dec 16 15:33:04.003 [3194] dbg: bayes: Using userid: 666
Dec 16 15:33:04.183 [3194] dbg: bayes: corpus size: nspam = 345, nham = 925
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized body: 71 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized uri: 34 tokens
Dec 16 15:33:04.184 [3194] dbg: bayes: tokenized invisible: 0 tokens
...
</syntaxhighlight>
If you find something like:
<pre>
dbg: bayes: _get_db_version: SQL error: Malformed packet
...
bayes: database version 0 is different than we understand (3), aborting! at .../lib/site_perl/Mail/SpamAssassin/BayesStore/SQL.pm line 139.
</pre>
You might try to disable the MySQL query_cache:
<syntaxhighlight lang=mysql>
mysql> set GLOBAL query_cache_type=0;
Query OK, 0 rows affected, 1 warning (0.00 sec)
</syntaxhighlight>
This is no big deal, as the query cache is deprecated and has been removed in MySQL 8.0.
If it helps, don't forget to put the setting in your MySQL config as well.
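To persist this across server restarts, the equivalent settings can go into the MySQL server config (a sketch; the file location varies by distribution, e.g. /etc/mysql/my.cnf or a file under conf.d):
<syntaxhighlight>
[mysqld]
# Disable the deprecated query cache (removed in MySQL 8.0)
query_cache_type = 0
query_cache_size = 0
</syntaxhighlight>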
Another way to check whether your Bayes store is working:
<syntaxhighlight lang=bash>
$ sa-learn --siteconfigpath=/etc/spamassassin --dump magic -u <your user>
0.000 0 3 0 non-token data: bayes db version
0.000 0 6801468 0 non-token data: nspam
0.000 0 2184181 0 non-token data: nham
0.000 0 1776152 0 non-token data: ntokens
0.000 0 1639643612 0 non-token data: oldest atime
0.000 0 1639663214 0 non-token data: newest atime
0.000 0 0 0 non-token data: last journal sync atime
0.000 0 1639643616 0 non-token data: last expiry atime
0.000 0 43200 0 non-token data: last expire atime delta
0.000 0 2137 0 non-token data: last expire reduction count
</syntaxhighlight>
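When the counters look wrong, the Bayes store can be retrained from known-good corpora with sa-learn (a sketch; the corpus paths are placeholders):
<syntaxhighlight lang=bash>
# Train from directories of spam and ham messages
$ sa-learn --siteconfigpath=/etc/spamassassin --spam ~/corpus/spam/
$ sa-learn --siteconfigpath=/etc/spamassassin --ham ~/corpus/ham/
# Sync the journal and run expiry afterwards
$ sa-learn --siteconfigpath=/etc/spamassassin --sync
$ sa-learn --siteconfigpath=/etc/spamassassin --force-expire
</syntaxhighlight>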
98298ce1be998d517109ade96c975703df03043f
Template:Systematik
10
117
2621
2316
2021-12-16T17:35:28Z
Lollypop
2
Text replacement - "Kategorie:" to "Category:"
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Category:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Category:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Category:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Category:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Category:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Category:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Category:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Category:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Category:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Category:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Category:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Category:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Category:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Category:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Category:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Category:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Category:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Category:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Category:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Category:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Category:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Category:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Category:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Category:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Category:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Category:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Category:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Category:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Category:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Category:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Category:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Category:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Category:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Category:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Category:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Category:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Category:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Category:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[Bild:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width=250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Scientific name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Further information
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Distribution:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Diet:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Humidity:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperature:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Category:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Category: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Category: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Category: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Category: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Category: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Category: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Category: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Category: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Category: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Category: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Category: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Category: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Category: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Category: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Category: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Category: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Category: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Category: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Category: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Category: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Category: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Category: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Category: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Category: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Category: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Category: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Category: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Category: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Category: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Category: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Category: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Category: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Category: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Category: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Category: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Category: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{tribus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Category: {{{subfamilia|}}}{{!}}{{{tribus|}}}]]
| {{#if: {{{familia|}}} | [[Category: {{{familia|}}}{{!}}{{{tribus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{tribus|}}}
| [[Category: {{{tribus|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{subfamilia|}}}
| [[Category: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Category: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Category: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Category: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#if: {{{subgenus|}}}
| [[Category: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Category: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
3a732a2108031f920442726d625b7a1bee6288f9
2623
2621
2021-12-16T17:41:13Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>
{{#vardefine:taxon|
{{#if:{{{dominia|}}} | -> [[:Category:{{{dominia|}}}{{!}}{{{dominia|}}}]]}}
{{#if:{{{regnum|}}} | -> [[:Category:{{{regnum|}}}{{!}}{{{regnum|}}}]]}}
{{#if:{{{subregnum|}}} | -> [[:Category:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]}}
{{#if:{{{superdivisio|}}}| -> [[:Category:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]}}
{{#if:{{{divisio|}}} | -> [[:Category:{{{divisio|}}}{{!}}{{{divisio|}}}]]}}
{{#if:{{{subdivisio|}}} | -> [[:Category:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]}}
{{#if:{{{superclassis|}}}| -> [[:Category:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]}}
{{#if:{{{classis|}}} | -> [[:Category:{{{classis|}}}{{!}}{{{classis|}}}]]}}
{{#if:{{{subclassis|}}} | -> [[:Category:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]}}
{{#if:{{{superordo|}}} | -> [[:Category:{{{superordo|}}}{{!}}{{{superordo|}}}]]}}
{{#if:{{{ordo|}}} | -> [[:Category:{{{ordo|}}}{{!}}{{{ordo|}}}]]}}
{{#if:{{{subordo|}}} | -> [[:Category:{{{subordo|}}}{{!}}{{{subordo|}}}]]}}
{{#if:{{{superfamilia|}}}| -> [[:Category:{{#if: {{{subordo|}}} | {{{subordo|}}} | {{{ordo|}}} }}{{!}}{{{superfamilia|}}}]]}}
{{#if:{{{familia|}}} | -> [[:Category:{{{familia|}}}{{!}}{{{familia|}}}]]}}
{{#if:{{{subfamilia|}}} | -> [[:Category:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]}}
{{#if:{{{tribus|}}} | -> [[:Category:{{{tribus|}}}{{!}}{{{tribus|}}}]]}}
{{#if:{{{genus|}}} | -> [[:Category:{{{genus|}}}{{!}}{{{genus|}}}]]}}
{{#if:{{{subgenus|}}} | -> [[:Category:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]}}
}}
{{#vardefine:taxonbox|
{{#if:{{{dominia|}}}
| {{!-}}
{{!}} Dominia:
{{!}} ''[[:Category:{{{dominia|}}}{{!}}{{{dominia|}}}]]''
}}
{{#if:{{{regnum|}}}
| {{!-}}
{{!}} Regnum:
{{!}} ''[[:Category:{{{regnum|}}}{{!}}{{{regnum|}}}]]''
}}
{{#if:{{{subregnum|}}}
| {{!-}}
{{!}} Subregnum:
{{!}} ''[[:Category:{{{subregnum|}}}{{!}}{{{subregnum|}}}]]''
}}
{{#if:{{{superdivisio|}}}
| {{!-}}
{{!}} Superdivisio:
{{!}} ''[[:Category:{{{superdivisio|}}}{{!}}{{{superdivisio|}}}]]''
}}
{{#if:{{{divisio|}}}
| {{!-}}
{{!}} Divisio:
{{!}} ''[[:Category:{{{divisio|}}}{{!}}{{{divisio|}}}]]''
}}
{{#if:{{{phylum|}}}
| {{!-}}
{{!}} Phylum:
{{!}} ''[[:Category:{{{phylum|}}}{{!}}{{{phylum|}}}]]''
}}
{{#if:{{{subdivisio|}}}
| {{!-}}
{{!}} Subdivisio:
{{!}} ''[[:Category:{{{subdivisio|}}}{{!}}{{{subdivisio|}}}]]''
}}
{{#if:{{{subphylum|}}}
| {{!-}}
{{!}} Subphylum:
{{!}} ''[[:Category:{{{subphylum|}}}{{!}}{{{subphylum|}}}]]''
}}
{{#if:{{{superclassis|}}}
| {{!-}}
{{!}} Superclassis:
{{!}} ''[[:Category:{{{superclassis|}}}{{!}}{{{superclassis|}}}]]''
}}
{{#if:{{{classis|}}}
| {{!-}}
{{!}} Classis:
{{!}} ''[[:Category:{{{classis|}}}{{!}}{{{classis|}}}]]''
}}
{{#if:{{{subclassis|}}}
| {{!-}}
{{!}} Subclassis:
{{!}} ''[[:Category:{{{subclassis|}}}{{!}}{{{subclassis|}}}]]''
}}
{{#if:{{{superordo|}}}
| {{!-}}
{{!}} Superordo:
{{!}} ''[[:Category:{{{superordo|}}}{{!}}{{{superordo|}}}]]''
}}
{{#if:{{{ordo|}}}
| {{!-}}
{{!}} Ordo:
{{!}} ''[[:Category:{{{ordo|}}}{{!}}{{{ordo|}}}]]''
}}
{{#if:{{{subordo|}}}
| {{!-}}
{{!}} Subordo:
{{!}} ''[[:Category:{{{subordo|}}}{{!}}{{{subordo|}}}]]''
}}
{{#if:{{{superfamilia|}}}
| {{!-}}
{{!}} Superfamilia:
{{!}} ''[[:Category:{{{superfamilia|}}}{{!}}{{{superfamilia|}}}]]''
}}
{{#if:{{{familia|}}}
| {{!-}}
{{!}} Familia:
{{!}} ''[[:Category:{{{familia|}}}{{!}}{{{familia|}}}]]''
}}
{{#if:{{{subfamilia|}}}
| {{!-}}
{{!}} Subfamilia:
{{!}} ''[[:Category:{{{subfamilia|}}}{{!}}{{{subfamilia|}}}]]''
}}
{{#if:{{{tribus|}}}
| {{!-}}
{{!}} Tribus:
{{!}} ''[[:Category:{{{tribus|}}}{{!}}{{{tribus|}}}]]''
}}
{{#if:{{{genus|}}}
| {{!-}}
{{!}} Genus:
{{!}} ''[[:Category:{{{genus|}}}{{!}}{{{genus|}}}]]''
}}
{{#if:{{{subgenus|}}}
| {{!-}}
{{!}} Subgenus:
{{!}} ''[[:Category:{{{subgenus|}}}{{!}}{{{subgenus|}}}]]''
}}
{{#if:{{{species|}}}
| {{!-}}
{{!}} Species:
{{!}} ''{{{genus|}}} {{{subgenus|}}} {{{species|}}}{{#if: {{{varietas|}}}| " var. {{{varietas|}}}"}}{{#if: {{{forma|}}}| " f. {{{forma|}}}"}}''
}}
}}
{| cellpadding="2" cellspacing="0" style="border: 1px solid #6688AA;width:260px; background-color:#efefef; float:right;overflow: hidden; margin-left:10px; margin-bottom:10px;" valign="middle" |
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"|
'''''{{PAGENAME}}'''''
{{#if:{{{DeName|}}}|
<br>({{{DeName|}}})
}}
|-
|style="border: 0px solid #6688AA;border-bottom-width:1px;"| <div style="text-align:center;font-size:8pt;">
{{#if: {{{Bild|}}}
| [[File:{{{Bild}}}{{!}}frameless{{!}}250x300px{{!}}{{{Bildbeschreibung}}}]]
}}</div>
|
|
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;"| Systematik
|-
|style="text-align:center;width:250px;"|
{| style="background-color:#efefef;text-align:left;"
{{#if:{{#var:taxonbox}}
| {{#regex: {{#var:taxonbox}} | /(\n)[\n]+/ | $1 }}
}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Wissenschaftlicher Name
|-
|style="text-align:center"|
{| style="text-align:center;background-color:#efefef;width:250px"
|-
|style="text-align:center;width:250px"|''{{{genus|}}} {{{species|}}}'' {{#if:{{{Autor|}}}|{{{Autor|}}}}}
|}
|-
!style="background-color:#b2deac;text-align:center;border: 0px solid #6688AA;border-bottom-width:1px;border-top-width:1px;"| Weitere Informationen
|-
|style="text-align:left"|
{| style="text-align:left;background-color:#efefef;width:250px"
{{#if:{{{Verbreitung|}}}
| {{!-}}
{{!}} Verbreitung:
{{!}} {{{Verbreitung|}}}
| {{!-}}
}}
{{#if:{{{Habitat|}}}
| {{!-}}
{{!}} Habitat:
{{!}} {{{Habitat|}}}
| {{!-}}
}}
{{#if:{{{Nahrung|}}}
| {{!-}}
{{!}} Nahrung:
{{!}} {{{Nahrung|}}}
| {{!-}}
}}
{{#if:{{{Luftfeuchtigkeit|}}}
| {{!-}}
{{!}} Luftfeuchtigkeit:
{{!}} {{{Luftfeuchtigkeit|}}}
| {{!-}}
}}
{{#if:{{{Temperatur|}}}
| {{!-}}
{{!}} Temperatur:
{{!}} {{{Temperatur|}}}
| {{!-}}
}}
{{#if:{{{StudyGroupNumber|}}}
| {{!-}}
{{!}} StudyGroupNumber:
{{!}} {{{StudyGroupNumber|}}}
| {{!-}}
}}
|}
|}
{{#ifeq: {{NAMESPACE}} | {{ns:0}} | [[Category:species]]}}
{{#if:{{#var:taxon}}
| {{#regex: {{#var:taxon}} | /[ \r\n]+/ | }}
}}
{{#if:{{{www.faunaeur.org_id|}}}|
* [http://www.faunaeur.org/full_results.php?id={{{www.faunaeur.org_id|}}} Fauna Europaea : www.faunaeur.org -> {{PAGENAME}}]
}}
{{#if:{{{cockroach.speciesfile.org_TaxonNameID|}}}|
* [http://cockroach.speciesfile.org/common/basic/Taxa.aspx?TaxonNameID={{{cockroach.speciesfile.org_TaxonNameID|}}} Cockroach Species File (CSF) : cockroach.speciesfile.org -> {{PAGENAME}}]
}}
{{#if:{{{LSID|}}}|
* [http://www.ipni.org/lsids.html LSID] : {{{LSID|}}}
}}
{{#ifeq:{{PAGENAME}}|{{{regnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{dominia|}}} | [[Category: {{{dominia|}}} {{!}} {{{regnum|}}}]] }}
}}
{{#ifeq:{{PAGENAME}}|{{{subregnum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{regnum|}}}
| [[Category: {{{regnum|}}}{{!}}{{{subregnum|}}}]]
| {{#if: {{{dominia|}}} | [[Category: {{{dominia|}}}{{!}}{{{subregnum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Category: {{{subregnum|}}}{{!}}{{{superdivisio|}}}]]
| {{#if: {{{regnum|}}} | [[Category: {{{regnum|}}}{{!}}{{{superdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subregnum|}}}
| [[Category: {{{subregnum|}}}{{!}}{{{superphylum|}}}]]
| {{#if: {{{regnum|}}} | [[Category: {{{regnum|}}}{{!}}{{{superphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{divisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superdivisio|}}}
| [[Category: {{{superdivisio|}}}{{!}}{{{divisio|}}}]]
| {{#if: {{{subregnum|}}} | [[Category: {{{subregnum|}}}{{!}}{{{divisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{phylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superphylum|}}}
| [[Category: {{{superphylum|}}}{{!}}{{{phylum|}}}]]
| {{#if: {{{subregnum|}}} | [[Category: {{{subregnum|}}}{{!}}{{{phylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subdivisio}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{divisio|}}}
| [[Category: {{{divisio|}}}{{!}}{{{subdivisio|}}}]]
| {{#if: {{{superdivisio|}}} | [[Category: {{{superdivisio|}}}{{!}}{{{subdivisio|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subphylum}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{phylum|}}}
| [[Category: {{{phylum|}}}{{!}}{{{subphylum|}}}]]
| {{#if: {{{superphylum|}}} | [[Category: {{{superphylum|}}}{{!}}{{{subphylum|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subdivisio|}}}
| [[Category: {{{subdivisio|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{divisio|}}} | [[Category: {{{divisio|}}}{{!}}{{{superclassis|}}}]] }}
}}
{{#if: {{{subphylum|}}}
| [[Category: {{{subphylum|}}}{{!}}{{{superclassis|}}}]]
| {{#if: {{{phylum|}}} | [[Category: {{{phylum|}}}{{!}}{{{superclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{classis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superclassis|}}}
| [[Category: {{{superclassis|}}}{{!}}{{{classis|}}}]]
| {{#if: {{{subdivisio|}}} | [[Category: {{{subdivisio|}}}{{!}}{{{classis|}}}]] }}
{{#if: {{{subphylum|}}} | [[Category: {{{subphylum|}}}{{!}}{{{classis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subclassis}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{classis|}}}
| [[Category: {{{classis|}}}{{!}}{{{subclassis|}}}]]
| {{#if: {{{superclassis|}}} | [[Category: {{{superclassis|}}}{{!}}{{{subclassis|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subclassis|}}}
| [[Category: {{{subclassis|}}}{{!}}{{{superordo|}}}]]
| {{#if: {{{classis|}}} | [[Category: {{{classis|}}}{{!}}{{{superordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{ordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superordo|}}}
| [[Category: {{{superordo|}}}{{!}}{{{ordo|}}}]]
| {{#if: {{{subclassis|}}} | [[Category: {{{subclassis|}}}{{!}}{{{ordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subordo}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{ordo|}}}
| [[Category: {{{ordo|}}}{{!}}{{{subordo|}}}]]
| {{#if: {{{superordo|}}} | [[Category: {{{superordo|}}}{{!}}{{{subordo|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{superfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subordo|}}}
| [[Category: {{{subordo|}}}{{!}}{{{superfamilia|}}}]]
| {{#if: {{{ordo|}}} | [[Category: {{{ordo|}}}{{!}}{{{superfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{familia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{superfamilia|}}}
| [[Category: {{{superfamilia|}}}{{!}}{{{familia|}}}]]
| {{#if: {{{subordo|}}} | [[Category: {{{subordo|}}}{{!}}{{{familia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subfamilia}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{familia|}}}
| [[Category: {{{familia|}}}{{!}}{{{subfamilia|}}}]]
| {{#if: {{{superfamilia|}}} | [[Category: {{{superfamilia|}}}{{!}}{{{subfamilia|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{tribus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{subfamilia|}}}
| [[Category: {{{subfamilia|}}}{{!}}{{{tribus|}}}]]
| {{#if: {{{familia|}}} | [[Category: {{{familia|}}}{{!}}{{{tribus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{tribus|}}}
| [[Category: {{{tribus|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{subfamilia|}}}
| [[Category: {{{subfamilia|}}}{{!}}{{{genus|}}}]]
| {{#if: {{{familia|}}} | [[Category: {{{familia|}}}{{!}}{{{genus|}}}]] }}
}}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{subgenus}}}|
{{#categorytree:{{PAGENAME}}|mode=pages|hideroot=on|depth=5}}
{{#if: {{{genus|}}}
| [[Category: {{{genus|}}}{{!}}{{{subgenus|}}}]]
| {{#if: {{{subfamilia|}}} | [[Category: {{{subfamilia|}}}{{!}}{{{subgenus|}}}]] }}
}}
}}
{{#ifeq:{{PAGENAME}}|{{{genus}}} {{{species}}}|
{{#if: {{{subgenus|}}}
| [[Category: {{{subgenus|}}}{{!}}{{{species|}}}]]
| {{#if: {{{genus|}}} | [[Category: {{{genus|}}}{{!}}{{{species|}}}]] }}
}}
}}
</includeonly>
<noinclude>
<pre>
Beispielaufruf:
{{Systematik
| DeName = Fauchschabe
| Autor = van Herrewege, 1973
| ordo =
| subordo =
| superfamilia =
| familia = Blaberidae
| subfamilia = Oxyhaloinae
| tribus = Gromphadorhini
| genus = Princisia
| subgenus =
| species = vanwaerebeki
| Verbreitung =
| Habitat =
| Nahrung =
| Luftfeuchtigkeit =
| Temperatur = 24°C - 28°C
| StudyGroupNumber = BCG 34
| Winterruhe =
| cockroach.speciesfile.org_TaxonNameID = 1174416
| LSID = urn:lsid:Blattodea.genusfile.org:TaxonName:6326
}}
{{Systematik
| Autor =
| Bild =
| Bildbeschreibung =
| regnum = Animalia
| subregnum = Eumetazoa
| phylum = Arthropoda
| subphylum = Hexapoda
| classis = Insecta
| ordo = Dictyoptera
| subordo = Isoptera
| LSID = urn:lsid:faunaeur.org:taxname:11922
| www.faunaeur.org_id = 11922
}}
</pre>
</noinclude>
3582d38c5cb2f00dd148061e7ac9f05241d95fdb
Hauptseite
0
1
2622
1830
2021-12-16T17:35:56Z
Lollypop
2
Text replacement - "Kategorie:" to "Category:"
wikitext
text/x-wiki
First of all please read my [[Project:General_disclaimer|disclaimer]]!
Bitte zuerst bitte meinen [[Project:General_disclaimer|Haftungsausschluss]] lesen!
=[[:Category:KnowHow|KnowHow]]=
<categorytree mode=pages depth=2>KnowHow</categorytree>
=[[:Category:Projekte|Meine Projekte]]=
<categorytree mode=pages hideroot=on depth=3>Projekte</categorytree>
Anmerkungen immer gern an mich: Lars Timmann <<email>L@rs.Timmann.de</email>>
= Starthilfen zum Wiki =
Hilfe zur Benutzung und Konfiguration der Wiki-Software findest du im [http://meta.wikimedia.org/wiki/Help:Contents Benutzerhandbuch].
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Liste der Konfigurationsvariablen]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki-FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce Mailingliste neuer MediaWiki-Versionen]
Mit dem Urteil vom 12. Mai 1998 hat das Landgericht Hamburg entschieden, dass man durch die Anbringung eines Links die Inhalte der gelinkten Seiten ggf. mit zu verantworten hat. Dies kann nur dadurch verhindert werden, dass man sich ausdrücklich von diesem Inhalt distanziert. Für alle Links auf dieser Homepage gilt: Ich distanziere mich hiermit ausdrücklich von allen Inhalten aller verlinkten Seitenadressen auf meiner Homepage und mache mir diese Inhalte nicht zu eigen.
da7ace56d33b9921d3f2ba63744390f3de96ba5a
Template:Taxobox
10
44
2624
119
2021-12-16T17:45:28Z
Lollypop
2
wikitext
text/x-wiki
<includeonly>{| cellpadding="2" cellspacing="1" width="300" class="taxobox {{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|palaeobox}} float-right toptextcells" id="Vorlage_Taxobox" summary="Taxobox"
! {{#if: {{{Name|}}}|{{{Name}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name}}}|{{#if: {{{Taxon_WissName|}}}|{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}{{{Taxon_WissName}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}|''}}}}}}}}
{{#if: {{{Bild|}}}|{{#switch: {{lc:{{{Bild}}}}}
|fehlt|ohne|kein|keines= {{!-}}
|#default={{!-}}
{{!}} style="text-align:center;font-size:8pt;" {{!}} [[File:{{{Bild}}}|frameless|300x400px{{#if:{{{Bildbeschreibung|}}}|{{!}}{{{Bildbeschreibung}}}}}]]
{{#if: {{{Bildbeschreibung|}}}|{{#ifeq: {{{Bildbeschreibung}}}|ohne||{{{Bildbeschreibung|}}}}}|{{#if: {{{Taxon_Name|}}}|{{{Taxon_Name|}}} {{#if: {{{Taxon_WissName|}}}|(''{{{Taxon_WissName|}}}'')}}|''{{{Taxon_WissName|}}}''}}}}
}}|{{!-}}}}
{{#ifeq: {{lc:{{{Modus|taxobox}}}}}|paläobox|
{{#if: {{{ErdzeitalterVon|}}}{{{MioVon|}}}{{{TausendVon|}}}|
{{!-}}
! [[Erdzeitalter|Zeitraum]]
{{#if: {{{ErdzeitalterVon|}}}|
{{!-}}
{{!}}class="taxo-zeit"{{!}} {{{ErdzeitalterVon|}}}{{#if: {{{ErdzeitalterBis|}}}| bis {{{ErdzeitalterBis}}}}}}}}}
{{#if: {{{MioVon|}}}|
{{#if: {{{TausendBis|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}} [[Mya (Einheit)|Mio. Jahre]] bis {{{TausendBis}}}.000 Jahre
|{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{MioVon|}}}{{#if: {{{MioBis|}}}| bis {{{MioBis}}}}} [[Mya (Einheit)|Mio. Jahre]]}}}}
{{#if: {{{TausendVon|}}}|
{{!-}}
{{!}}class="taxo-zeit" {{!}}{{{TausendVon|}}}{{#if: {{{TausendBis|}}}| bis {{{TausendBis}}}}}.000 Jahre}}
{{#if: {{{Fundorte|}}} |
{{!-}}
! [[Fossil|Fundorte]]
{{!-}}
{{!}} class="taxo-ort" {{!}}
{{{Fundorte}}}}}}}
|-
! [[Systematik (Biologie)|Systematik]]
|-
|
{| width="100%"
{{Taxobox/Zeile
| Rang = {{{Taxon6_Rang|}}}
| Name = {{{Taxon6_Name|}}}
| LinkName = {{{Taxon6_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon6_LinkName|}}}|nein|ja}}
| WissName = {{{Taxon6_WissName|}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon5_Rang|}}}
| Name = {{{Taxon5_Name|}}}
| LinkName = {{{Taxon5_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon5_LinkName|}}}|nein|ja|{{#if:{{{Taxon5_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon5_Autor|}}}|{{#if:{{{Taxon5_Name|}}}||{{{Taxon5_WissName}}}}}|{{{Taxon5_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon4_Rang|}}}
| Name = {{{Taxon4_Name|}}}
| LinkName = {{{Taxon4_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon4_LinkName|}}}|nein|ja|{{#if:{{{Taxon4_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon4_Autor|}}}|{{#if:{{{Taxon4_Name|}}}||{{{Taxon4_WissName}}}}}|{{{Taxon4_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon3_Rang|}}}
| Name = {{{Taxon3_Name|}}}
| LinkName = {{{Taxon3_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon3_LinkName|}}}|nein|ja|{{#if:{{{Taxon3_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon3_Autor|}}}|{{#if:{{{Taxon3_Name|}}}||{{{Taxon3_WissName}}}}}|{{{Taxon3_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon2_Rang|}}}
| Name = {{{Taxon2_Name|}}}
| LinkName = {{{Taxon2_LinkName|}}}
| KeinLink = {{#ifeq:{{{Taxon2_LinkName|}}}|nein|ja|{{#if:{{{Taxon2_Autor|}}}|ja}}}}
| WissName = {{#if:{{{Taxon2_Autor|}}}|{{#if:{{{Taxon2_Name|}}}||{{{Taxon2_WissName}}}}}|{{{Taxon2_WissName|}}}}}
| KeinRang = {{{Rangunterdrückung|}}}
}}
{{Taxobox/Zeile
| Rang = {{{Taxon_Rang|}}}
| Name = {{{Taxon_Name|}}}
| WissName = {{#if:{{{Taxon_Name|}}}||{{{Taxon_WissName|}}}}}
| KeinLink = ja
| KeinRang = {{{Rangunterdrückung|}}}
}}
|}
|-
{{#if: {{{Taxon5_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon5_Rang|}}}
| WissName = {{{Taxon5_WissName|}}}
| Autor = {{{Taxon5_Autor|}}}
}}}}
{{#if: {{{Taxon4_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon4_Rang|}}}
| WissName = {{{Taxon4_WissName|}}}
| Autor = {{{Taxon4_Autor|}}}
}}}}
{{#if: {{{Taxon3_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon3_Rang|}}}
| WissName = {{{Taxon3_WissName|}}}
| Autor = {{{Taxon3_Autor|}}}
}}}}
{{#if: {{{Taxon2_Autor|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon2_Rang|}}}
| WissName = {{{Taxon2_WissName|}}}
| Autor = {{{Taxon2_Autor|}}}
}}}}
{{#if: {{{Taxon_WissName|}}} | {{Taxobox/Zitat
| Rang = {{{Taxon_Rang|}}}
| WissName = {{{Taxon_WissName|}}}
| Autor = {{{Taxon_Autor|}}}
| KeinRang = {{#if: {{{Taxon2_Autor|}}}{{{Taxon3_Autor|}}}{{{Taxon4_Autor|}}}{{{Taxon5_Autor|}}}||ja}}
}}}}
{{#if: {{{Subtaxa_Rang|}}} | {{!-}}
!{{Taxobox/Rang|Rang={{{Subtaxa_Rang}}}|Plural={{{Subtaxa_Plural|ja}}}}}
{{!-}}
{{!}}
{{#if: {{{Subtaxa|}}} | {{{Subtaxa}}} }}}}
|}{{#if: {{{Taxon_Name|}}}{{#ifexpr: {{Taxobox/IstRangKursiv|{{{Taxon_Rang|}}}}}||nonitalic}}
|
| {{#ifexpr: {{str find|{{PAGENAME}}|(}} = -1
| {{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:}}''{{#if: {{{Taxon_WissName|}}}|{{{Taxon_WissName}}}|{{PAGENAME}}}}''}}
| <span style="display:none">[[Vorlage:Taxobox/Wartung/KlammerlemmaUndKursiv]]</span>
}}}}</includeonly><noinclude>{{Dokumentation}}
</noinclude>
fdfdd8a3a0c1c41a2189b78a5e1ad9be67341ae7
GNUTLS
0
393
2627
2022-02-09T17:33:22Z
Lollypop
2
Created page with "[[ Category: Security ]] == Match the required ciphers for the German BSI == * [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (german)] For example the ciphers that are defined secure for perfect forward secrecy in that document are: {| class="wikitable" | Cipher-Suite || IANA-No. || Refenrenced || Usable until |- | TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2..."
wikitext
text/x-wiki
[[ Category: Security ]]
== Match the required ciphers for the German BSI ==
* [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (German)]
For example, the cipher suites that the document defines as secure with perfect forward secrecy are:
{| class="wikitable"
! Cipher-Suite !! IANA-No. !! Referenced !! Usable until
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x24 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2B || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x2C || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CCM || 0xC0,0xAC || [RFC7251] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CCM || 0xC0,0xAD || [RFC7251] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x27 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x28 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2F || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x30 || [RFC5289] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 || 0x00,0x40 || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 || 0x00,0x6A || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 || 0x00,0xA2 || [RFC5288] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 || 0x00,0xA3 || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 || 0x00,0x67 || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 || 0x00,0x6B || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 || 0x00,0x9E || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 || 0x00,0x9F || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CCM || 0xC0,0x9E || [RFC6655] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CCM || 0xC0,0x9F || [RFC6655] || 2027+
|}
The naming scheme of the table is: TLS_(key exchange algorithm)_WITH_(cipher)_(hash algorithm)
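To make the scheme concrete, one suite name from the table can be taken apart with plain POSIX parameter expansion (an illustrative sketch only; the suite name is copied from the table above):
<syntaxhighlight lang=bash>
# Split a TLS cipher suite name into its three parts:
# TLS_(key exchange)_WITH_(cipher)_(hash)
SUITE='TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256'
KX="${SUITE#TLS_}"; KX="${KX%%_WITH_*}"   # key exchange: ECDHE_RSA
REST="${SUITE#*_WITH_}"                   # remainder:    AES_128_GCM_SHA256
HASH="${REST##*_}"                        # hash:         SHA256
CIPHER="${REST%_*}"                       # cipher:       AES_128_GCM
echo "$KX / $CIPHER / $HASH"
</syntaxhighlight>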
So a GnuTLS priority definition that matches these requirements can be built from the following parts:
# Some basic security settings: %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS
# Disable defaults, enable only TLSv1.2: -VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL
# Set the key exchange algorithms: +ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA
# Set the ciphers: +AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM
# Set the hash algorithms: +SHA256:+SHA384
# Set the wanted curves from the document above: +CURVE-SECP256R1:+CURVE-SECP384R1
# Set the signature algorithm used in your certificate: +SIGN-RSA-SHA256
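The fragments above can be joined into a single priority string, for example in a small shell script (a sketch; the variable names are my own, each one mirrors one of the numbered steps):
<syntaxhighlight lang=bash>
# Assemble the GnuTLS priority string from the fragments listed above.
BASE='%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS'
PROTOS='-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL'
KX='+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA'
CIPHERS='+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM'
MACS='+SHA256:+SHA384'
CURVES='+CURVE-SECP256R1:+CURVE-SECP384R1'
SIGS='+SIGN-RSA-SHA256'
PRIORITY="${BASE}:${PROTOS}:${KX}:${CIPHERS}:${MACS}:${CURVES}:${SIGS}"
echo "$PRIORITY"
</syntaxhighlight>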
And now put it all together and let us see what happens:
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+SHA256:+SHA384:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256'
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+SHA256:+SHA384:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
Certificate types: CTYPE-X.509
Protocols: VERS-TLS1.2
Compression: COMP-NULL
Elliptic curves: CURVE-SECP256R1, CURVE-SECP384R1
PK-signatures: SIGN-RSA-SHA256, SIGN-ECDSA-SHA256, SIGN-RSA-SHA384, SIGN-ECDSA-SHA384, SIGN-RSA-SHA512, SIGN-ECDSA-SHA512, SIGN-RSA-SHA224, SIGN-ECDSA-SHA224, SIGN-RSA-SHA1, SIGN-ECDSA-SHA1
</syntaxhighlight>
As you can see, the result is not everything we would expect, but it covers everything that is implemented in GnuTLS and can be used under our restrictions.
As far as I know: that's it!
322ae78725fc1683e95ba29d3bf8efe325501926
2628
2627
2022-02-09T17:35:13Z
Lollypop
2
/* Match the required ciphers for the German BSI */
wikitext
text/x-wiki
[[ Category: Security ]]
== Match the required ciphers for the German BSI ==
* [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (german)]
For example, the cipher suites that this document defines as secure with perfect forward secrecy are:
{| class="wikitable"
| Cipher-Suite || IANA-No. || Referenced || Usable until
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x24 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2B || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x2C || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CCM || 0xC0,0xAC || [RFC7251] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CCM || 0xC0,0xAD || [RFC7251] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x27 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x28 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2F || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x30 || [RFC5289] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 || 0x00,0x40 || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 || 0x00,0x6A || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 || 0x00,0xA2 || [RFC5288] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 || 0x00,0xA3 || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 || 0x00,0x67 || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 || 0x00,0x6B || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 || 0x00,0x9E || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 || 0x00,0x9F || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CCM || 0xC0,0x9E || [RFC6655] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CCM || 0xC0,0x9F || [RFC6655] || 2027+
|}
The naming scheme in the table is: TLS_(key exchange algorithm)_WITH_(cipher)_(hash algorithm)
So, a GnuTLS priority definition that matches these requirements is built as follows:
# Some basic security settings: %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS
# Disable defaults, enable only TLSv1.2: -VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL
# Set the key exchange algorithms: +ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA
# Set the ciphers: +AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM
# Set the hash algorithms: +SHA256:+SHA384
# Set the wanted curves from the document above: +CURVE-SECP256R1:+CURVE-SECP384R1
# Set the signature algorithm used in your certificate: +SIGN-RSA-SHA256
And now put it all together and let us see what happens:
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+SHA256:+SHA384:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256'
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+SHA256:+SHA384:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
Certificate types: CTYPE-X.509
Protocols: VERS-TLS1.2
Compression: COMP-NULL
Elliptic curves: CURVE-SECP256R1, CURVE-SECP384R1
PK-signatures: SIGN-RSA-SHA256, SIGN-ECDSA-SHA256, SIGN-RSA-SHA384, SIGN-ECDSA-SHA384, SIGN-RSA-SHA512, SIGN-ECDSA-SHA512, SIGN-RSA-SHA224, SIGN-ECDSA-SHA224, SIGN-RSA-SHA1, SIGN-ECDSA-SHA1
</syntaxhighlight>
As you can see, it is not everything we would expect, but it covers everything that is implemented in GnuTLS and can be used under our restrictions.
As far as I know: that's it!
919f763a990c11fa31f86026177a1d70412ce62e
2629
2628
2022-02-09T18:29:48Z
Lollypop
2
/* Match the required ciphers for the German BSI */
wikitext
text/x-wiki
[[ Category: Security ]]
== Match the required ciphers for the German BSI ==
* [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (german)]
For example, the cipher suites that this document defines as secure with perfect forward secrecy are:
{| class="wikitable"
|-
! scope="col"| Cipher-Suite
! scope="col"| IANA-No.
! scope="col"| Referenced
! scope="col"| Usable until
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x24 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2B || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x2C || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CCM || 0xC0,0xAC || [RFC7251] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CCM || 0xC0,0xAD || [RFC7251] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x27 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x28 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2F || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x30 || [RFC5289] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 || 0x00,0x40 || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 || 0x00,0x6A || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 || 0x00,0xA2 || [RFC5288] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 || 0x00,0xA3 || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 || 0x00,0x67 || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 || 0x00,0x6B || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 || 0x00,0x9E || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 || 0x00,0x9F || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CCM || 0xC0,0x9E || [RFC6655] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CCM || 0xC0,0x9F || [RFC6655] || 2027+
|}
The naming scheme in the table is: TLS_(key exchange algorithm)_WITH_(cipher)_(hash algorithm)
So, a GnuTLS priority definition that matches these requirements is built as follows:
# Some basic security settings: %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS
# Disable defaults, enable only TLSv1.2: -VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL
# Set the key exchange algorithms: +ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA
# Set the ciphers: +AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM
# Set the hash algorithms: +SHA256:+SHA384
# Set the wanted curves from the document above: +CURVE-SECP256R1:+CURVE-SECP384R1
# Set the signature algorithm used in your certificate: +SIGN-RSA-SHA256
And now put it all together and let us see what happens:
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+SHA256:+SHA384:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256'
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+SHA256:+SHA384:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
Certificate types: CTYPE-X.509
Protocols: VERS-TLS1.2
Compression: COMP-NULL
Elliptic curves: CURVE-SECP256R1, CURVE-SECP384R1
PK-signatures: SIGN-RSA-SHA256, SIGN-ECDSA-SHA256, SIGN-RSA-SHA384, SIGN-ECDSA-SHA384, SIGN-RSA-SHA512, SIGN-ECDSA-SHA512, SIGN-RSA-SHA224, SIGN-ECDSA-SHA224, SIGN-RSA-SHA1, SIGN-ECDSA-SHA1
</syntaxhighlight>
As you can see, it is not everything we would expect, but it covers everything that is implemented in GnuTLS and can be used under our restrictions.
As far as I know: that's it!
651a735fd6f756b7d584407385d674cf74b2c0f2
2630
2629
2022-02-14T08:34:50Z
Lollypop
2
/* Match the required ciphers for the German BSI */
wikitext
text/x-wiki
[[ Category: Security ]]
== Match the required ciphers for the German BSI ==
* [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (german)]
For example, the cipher suites that this document defines as secure with perfect forward secrecy are:
{| class="wikitable"
|-
! scope="col"| Cipher-Suite
! scope="col"| IANA-No.
! scope="col"| Referenced
! scope="col"| Usable until
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x24 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2B || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x2C || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CCM || 0xC0,0xAC || [RFC7251] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CCM || 0xC0,0xAD || [RFC7251] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x27 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x28 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2F || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x30 || [RFC5289] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 || 0x00,0x40 || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 || 0x00,0x6A || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 || 0x00,0xA2 || [RFC5288] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 || 0x00,0xA3 || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 || 0x00,0x67 || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 || 0x00,0x6B || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 || 0x00,0x9E || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 || 0x00,0x9F || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CCM || 0xC0,0x9E || [RFC6655] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CCM || 0xC0,0x9F || [RFC6655] || 2027+
|}
The naming scheme in the table is: TLS_(key exchange algorithm)_WITH_(cipher)_(hash algorithm)
So, a GnuTLS priority definition that matches these requirements is built as follows:
# Some basic security settings: %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS
# Disable defaults, enable only TLSv1.2: -VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL
# Set the key exchange algorithms: +ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA
# Set the ciphers: +AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305 (used by Google Mail, so I needed it as well)
# Set the hash algorithms: +SHA256:+SHA384:+AEAD (The +AEAD is something that is not directly seen in the list above, but you need it for GCM)
# Set the wanted curves from the document above: +CURVE-SECP256R1:+CURVE-SECP384R1
# Set the signature algorithm used in your certificate: in my case +SIGN-RSA-SHA256
And now put it all together and let us see what happens:
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256'
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_ECDHE_RSA_CHACHA20_POLY1305 0xcc, 0xa8 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2
TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2
TLS_ECDHE_ECDSA_CHACHA20_POLY1305 0xcc, 0xa9 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_DSS_AES_256_GCM_SHA384 0x00, 0xa3 TLS1.2
TLS_DHE_DSS_AES_128_GCM_SHA256 0x00, 0xa2 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
TLS_DHE_RSA_AES_256_GCM_SHA384 0x00, 0x9f TLS1.2
TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2
TLS_DHE_RSA_CHACHA20_POLY1305 0xcc, 0xaa TLS1.2
Certificate types: CTYPE-X.509
Protocols: VERS-TLS1.2
Compression: COMP-NULL
Elliptic curves: CURVE-SECP256R1, CURVE-SECP384R1
PK-signatures: SIGN-RSA-SHA256
</syntaxhighlight>
As you can see, it is not everything we would expect, but it covers everything that is implemented in GnuTLS and can be used under our restrictions.
As far as I know: that's it!
e49a0e3c20c15951418c4e1737d3ef22c06ac082
2631
2630
2022-02-14T08:35:39Z
Lollypop
2
/* Match the required ciphers for the German BSI */
wikitext
text/x-wiki
[[ Category: Security ]]
== Match the required ciphers for the German BSI ==
* [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (german)]
For example, the cipher suites that this document defines as secure with perfect forward secrecy are:
{| class="wikitable"
|-
! scope="col"| Cipher-Suite
! scope="col"| IANA-No.
! scope="col"| Referenced
! scope="col"| Usable until
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x24 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2B || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x2C || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CCM || 0xC0,0xAC || [RFC7251] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CCM || 0xC0,0xAD || [RFC7251] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x27 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x28 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2F || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x30 || [RFC5289] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 || 0x00,0x40 || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 || 0x00,0x6A || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 || 0x00,0xA2 || [RFC5288] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 || 0x00,0xA3 || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 || 0x00,0x67 || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 || 0x00,0x6B || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 || 0x00,0x9E || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 || 0x00,0x9F || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CCM || 0xC0,0x9E || [RFC6655] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CCM || 0xC0,0x9F || [RFC6655] || 2027+
|}
The naming scheme in the table is: TLS_(key exchange algorithm)_WITH_(cipher)_(hash algorithm)
So, a GnuTLS priority definition that matches these requirements is built as follows:
# Some basic security settings: %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS
# Disable defaults, enable only TLSv1.2: -VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL
# Set the key exchange algorithms: +ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA
# Set the ciphers: +AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305 (used by Google Mail, so I needed it as well)
# Set the hash algorithms: +SHA256:+SHA384:+AEAD (The +AEAD is something that is not directly seen in the list above, but you need it for GCM)
# Set the wanted curves from the document above: +CURVE-SECP256R1:+CURVE-SECP384R1
# Set the signature algorithm used in your certificate: in my case +SIGN-RSA-SHA256
And now put it all together and let us see what happens:
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256'
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_ECDHE_RSA_CHACHA20_POLY1305 0xcc, 0xa8 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2
TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2
TLS_ECDHE_ECDSA_CHACHA20_POLY1305 0xcc, 0xa9 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_DSS_AES_256_GCM_SHA384 0x00, 0xa3 TLS1.2
TLS_DHE_DSS_AES_128_GCM_SHA256 0x00, 0xa2 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
TLS_DHE_RSA_AES_256_GCM_SHA384 0x00, 0x9f TLS1.2
TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2
TLS_DHE_RSA_CHACHA20_POLY1305 0xcc, 0xaa TLS1.2
Certificate types: CTYPE-X.509
Protocols: VERS-TLS1.2
Compression: COMP-NULL
Elliptic curves: CURVE-SECP256R1, CURVE-SECP384R1
PK-signatures: SIGN-RSA-SHA256
</syntaxhighlight>
As you can see, it is not everything we would expect, but it covers everything that is implemented in GnuTLS and can be used under our restrictions.
As far as I know: that's it!
ca44790e804d889a251c15fe169360512fb5a7d1
2632
2631
2022-02-14T08:36:47Z
Lollypop
2
/* Match the required ciphers for the German BSI */
wikitext
text/x-wiki
[[ Category: Security ]]
== Match the required ciphers for the German BSI ==
* [https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102-2.pdf BSI TR-02102-2 (german)]
For example, the cipher suites that this document defines as secure with perfect forward secrecy are:
{| class="wikitable"
|-
! scope="col"| Cipher-Suite
! scope="col"| IANA-No.
! scope="col"| Referenced
! scope="col"| Usable until
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x23 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x24 || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2B || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x2C || [RFC5289] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_128_CCM || 0xC0,0xAC || [RFC7251] || 2027+
|-
| TLS_ECDHE_ECDSA_WITH_AES_256_CCM || 0xC0,0xAD || [RFC7251] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 || 0xC0,0x27 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 || 0xC0,0x28 || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 || 0xC0,0x2F || [RFC5289] || 2027+
|-
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 || 0xC0,0x30 || [RFC5289] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 || 0x00,0x40 || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 || 0x00,0x6A || [RFC5246] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 || 0x00,0xA2 || [RFC5288] || 2027+
|-
| TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 || 0x00,0xA3 || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 || 0x00,0x67 || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 || 0x00,0x6B || [RFC5246] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 || 0x00,0x9E || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 || 0x00,0x9F || [RFC5288] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_128_CCM || 0xC0,0x9E || [RFC6655] || 2027+
|-
| TLS_DHE_RSA_WITH_AES_256_CCM || 0xC0,0x9F || [RFC6655] || 2027+
|}
The naming scheme in the table is: TLS_(key exchange algorithm)_WITH_(cipher)_(hash algorithm)
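To make the scheme concrete, a suite name can be taken apart with plain shell parameter expansion (a throwaway sketch just for illustration; the variable names are mine):

```shell
# Split an IANA suite name along the TLS_(kx)_WITH_(cipher)_(hash) scheme
suite='TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384'
kx=${suite#TLS_};  kx=${kx%%_WITH_*}   # key exchange  -> ECDHE_RSA
rest=${suite##*_WITH_}                 # cipher + hash -> AES_256_GCM_SHA384
hash=${rest##*_}                       # hash          -> SHA384
cipher=${rest%_*}                      # cipher        -> AES_256_GCM
echo "kx=${kx} cipher=${cipher} hash=${hash}"
```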
So, a GnuTLS priority definition that matches these requirements is built as follows:
# Some basic security settings: %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS
# Disable defaults, enable only TLSv1.2: -VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-CIPHER-ALL:-KX-ALL:-MAC-ALL:-CURVE-ALL
# Set the key exchange algorithms: +ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA
# Set the ciphers: +AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305 (CHACHA20-POLY1305 is used by Google Mail, so I needed it as well)
# Set the hash algorithms: +SHA256:+SHA384:+AEAD (The +AEAD is something that is not directly seen in the list above, but you need it for GCM)
# Set the wanted curves from the document above: +CURVE-SECP256R1:+CURVE-SECP384R1
# Set the signature algorithm used in your certificate: in my case +SIGN-RSA-SHA256
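The fragments from the list can also be kept in a shell array and joined with colons, which is easier to maintain than the long one-liner (a sketch; the array and variable names are mine):

```shell
# Assemble the GnuTLS priority string from its readable fragments
parts=(
  '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS'
  '-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL'
  '+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA'
  '+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305'
  '+SHA256:+SHA384:+AEAD'
  '+CURVE-SECP256R1:+CURVE-SECP384R1'
  '+SIGN-RSA-SHA256'
)
# Join array elements with ':' (IFS is only changed inside the subshell)
priority=$(IFS=:; echo "${parts[*]}")
echo "$priority"
```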
And now put it all together and let us see what happens:
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority '%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256'
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_ECDHE_RSA_CHACHA20_POLY1305 0xcc, 0xa8 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2
TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2
TLS_ECDHE_ECDSA_CHACHA20_POLY1305 0xcc, 0xa9 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_DSS_AES_256_GCM_SHA384 0x00, 0xa3 TLS1.2
TLS_DHE_DSS_AES_128_GCM_SHA256 0x00, 0xa2 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
TLS_DHE_RSA_AES_256_GCM_SHA384 0x00, 0x9f TLS1.2
TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2
TLS_DHE_RSA_CHACHA20_POLY1305 0xcc, 0xaa TLS1.2
Certificate types: CTYPE-X.509
Protocols: VERS-TLS1.2
Compression: COMP-NULL
Elliptic curves: CURVE-SECP256R1, CURVE-SECP384R1
PK-signatures: SIGN-RSA-SHA256
</syntaxhighlight>
As you can see, it is not everything we would expect, but it covers everything that is implemented in GnuTLS and can be used under our restrictions.
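Where the finished string ends up depends on your server software; with Apache's mod_gnutls, for example, it goes into the GnuTLSPriorities directive, roughly like this (the vhost layout and certificate paths are placeholders, not from my setup):

```apache
# Hypothetical mod_gnutls vhost excerpt -- adjust certificate paths to your setup
<VirtualHost *:443>
    GnuTLSEnable on
    GnuTLSCertificateFile /etc/ssl/certs/example.pem
    GnuTLSKeyFile /etc/ssl/private/example.key
    GnuTLSPriorities %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
</VirtualHost>
```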
As far as I know: that's it!
bbf73ed28b1494c21720a6727fe4d3acd1f59e06
Galera Cluster
0
383
2633
2567
2022-02-22T16:03:18Z
Lollypop
2
/* Galera settings */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Create a CA certificate with a very long lifetime, as you don't want to do routine certificate renewals at this point.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma-separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# Snapshot state transfer (SST): copy entire database, when new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file is different per node:
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
== Get information about your Cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
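The awk above just breaks the single semicolon-separated options line apart; the effect can be seen on a canned sample without a running server (the sample values below are made up):

```shell
# Demonstrate the reformatting on a canned provider-options string
sample='gcache.size = 1G; gcache.recover = yes; gmcast.listen_addr = ssl://10.33.6.1:4567'
echo "$sample" | tr ';' '\n'
```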
bb5bc40f24eca9a7925e98f9962838a11e95a5b2
Galera Cluster
0
383
2634
2633
2022-02-22T16:04:33Z
Lollypop
2
/* Galera settings */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Create a CA certificate with a very long lifetime, as you don't want to do routine certificate renewals at this point.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
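Before distributing the certificates, it is worth checking that each node certificate really chains to the CA. The sketch below is self-contained so it can be tried anywhere: it issues a throwaway 2048-bit CA and node certificate in a temp directory just to show the verify step (names and subjects are examples):

```shell
# Issue a throwaway CA + node cert and verify the chain with openssl
workdir=$(mktemp -d)
cd "$workdir"
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 -sha256 \
    -keyout ca-key.pem -out ca-cert.pem -batch -subj '/CN=Galera Test CA'
servername='maria-1.server.de'
openssl req -newkey rsa:2048 -nodes -keyout "${servername}-key.pem" \
    -out "${servername}-req.pem" -batch -subj "/CN=${servername}"
openssl x509 -req -days 3650 -set_serial 01 -in "${servername}-req.pem" \
    -out "${servername}-cert.pem" -CA ca-cert.pem -CAkey ca-key.pem
# Prints "<cert>: OK" when the certificate verifies against the CA
openssl verify -CAfile ca-cert.pem "${servername}-cert.pem"
```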
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# State snapshot transfer (SST): copies the entire database when a new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file differs per node (shown here for node 1, IP 10.33.6.1):
/etc/mysql/mariadb.conf.d/zz-node.cnf
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
== Getting information about your cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
ba052ed2fc82ead9fe4a98087e61b749724d8901
2635
2634
2022-02-22T17:35:04Z
Lollypop
2
/* Galera settings */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Create a CA certificate with a very long lifetime, as you don't want to go through regular certificate renewals for the cluster.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# State snapshot transfer (SST): copies the entire database when a new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file differs per node (here for node 1, IP 10.33.6.1):
/etc/mysql/mariadb.conf.d/zz-node.cnf
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
== Getting information about your cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
161cd2592d57ab6b73070d09f155da4e4d06eac2
2636
2635
2022-03-02T07:56:17Z
Lollypop
2
/* Galera settings */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Create a CA certificate with a very long lifetime, as you don't want to go through regular certificate renewals for the cluster.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# State snapshot transfer (SST): copies the entire database when a new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file differs per node (here for node 1, IP 10.33.6.1):
/etc/mysql/mariadb.conf.d/zz-node.cnf
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
If something else is already running on the default port 4567, you can change the port like this (here to 5000):
<syntaxhighlight lang=ini>
wsrep_provider_options = "base_port = 5000; gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:5000"
</syntaxhighlight>
Do not forget to also change the port in <i>gmcast.listen_addr</i> at the end.
== Getting information about your cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
fbbed61dbe421a015ab1eb56080c1ef9853b9411
2637
2636
2022-03-02T07:56:51Z
Lollypop
2
/* Galera settings */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Create a CA certificate with a very long lifetime, as you don't want to go through regular certificate renewals for the cluster.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
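Before distributing the files, a quick sanity check that each node certificate really chains to the CA can save debugging time later. This is a small helper sketch; the filenames follow the scheme generated above, and the function name is just for illustration:

```shell
# Check node certificates against the CA and print subject and expiry.
# (Filenames follow the maria-${node}.server.de scheme generated above.)
verify_node_certs() {
    for node in "$@"
    do
        cert="maria-${node}.server.de-cert.pem"
        openssl verify -CAfile ca-cert.pem "${cert}" || return 1
        openssl x509 -noout -subject -enddate -in "${cert}"
    done
}
# e.g.: verify_node_certs 1 2 3 4
```

Each certificate should report <code>OK</code>; anything else means the SST and group communication TLS handshakes will fail later.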
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# State snapshot transfer (SST): copies the entire database when a new node joins
wsrep_sst_method = mariabackup
# replace the_very_secret_mariabackup_password with your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file differs per node (here for node 1, IP 10.33.6.1):
/etc/mysql/mariadb.conf.d/zz-node.cnf
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
If something else is already running on the default port 4567, you can change the <i>base_port</i> like this (here to 5000):
<syntaxhighlight lang=ini>
wsrep_provider_options = "base_port = 5000; gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:5000"
</syntaxhighlight>
Do not forget to also change the port in <i>gmcast.listen_addr</i> at the end.
== Getting information about your cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
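Beyond the provider options, a handful of wsrep status variables tell you whether the cluster is actually healthy. The awk filter below only formats the output; the query itself needs a running node (the function name is just for illustration):

```shell
# Pick the most interesting Galera health indicators out of
# "SHOW GLOBAL STATUS" output (Variable_name<TAB>Value lines).
galera_health() {
    awk '$1 ~ /^(wsrep_cluster_size|wsrep_cluster_status|wsrep_local_state_comment|wsrep_ready)$/ {
        printf "%-28s %s\n", $1, $2
    }'
}
# On a live node:
#   mariadb -NBe "show global status like 'wsrep%'" | galera_health
```

A healthy four-node cluster shows <code>wsrep_cluster_size 4</code>, <code>wsrep_cluster_status Primary</code>, <code>wsrep_local_state_comment Synced</code> and <code>wsrep_ready ON</code>.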
2b2d4694b2bfe3573bcc151ba5a045fd4bffd1f8
Oracle Clients
0
342
2638
2488
2022-03-02T09:16:55Z
Lollypop
2
wikitext
text/x-wiki
[[category:Oracle]]
= Ubuntu =
Download
<pre>
oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
</pre>
from [http://www.oracle.com/technetwork/database/features/instant-client/index.html Oracle Instant Client download page]
<syntaxhighlight lang=bash>
$ sudo apt install alien libaio1
$ sudo alien -i oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
$ sudo alien -i oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm
$ for i in $(dpkg -L $(dpkg -l oracle-instantclient\* | awk '$1=="ii"{print $2;}') | grep '\.so' )
do
BASENAME=${i##*/}
sudo update-alternatives --install /usr/lib/${BASENAME} ${BASENAME} ${i} 10
done
$ dpkg -L $(dpkg -l oracle-instantclient\*-basiclite | awk '$1=="ii"{print $2;}') | \
awk '
/client64$/{
oracle_home=$1;
printf "ORACLE_HOME=%s\nPATH=${PATH}:${ORACLE_HOME}/bin\nexport ORACLE_HOME PATH\n",oracle_home;
}' | \
sudo tee /etc/profile.d/oracle.sh
</syntaxhighlight>
2e839b520bd4ce0a43e23e0b28f7b967f58efd43
MySQL slave with LVM
0
239
2639
2563
2022-03-02T09:18:22Z
Lollypop
2
wikitext
text/x-wiki
[[category:MySQL]]
'''UNFINISHED first few lines...'''
==Create LVM snapshot==
===Get the data mount===
<syntaxhighlight lang=bash>
master# df -h $(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg--mysql-mysql--data 138G 78G 55G 59% /var/lib/mysql
master# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
</syntaxhighlight>
Enough space for a snapshot?
<syntaxhighlight lang=bash>
master# lvs /dev/mapper/vg--mysql-mysql--data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
mysql-data vg-mysql -wi-ao--- 140,00g
master# vgs vg-mysql
VG #PV #LV #SN Attr VSize VFree
vg-mysql 2 3 1 wz--n- 199,99g 20,00g
</syntaxhighlight>
===Create a consistent snapshot===
<syntaxhighlight lang=bash>
master# mysql -e "FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;" > ${DATADIR}/master_status.$(date "+%Y%m%d_%H%M%S")
master# lvcreate -l50%FREE -s -n mysql-data-snap /dev/vg-mysql/mysql-data
master# mysql -e "UNLOCK TABLES;"
master# mount /dev/vg-mysql/mysql-data-snap /mnt
master# cat /mnt/master_status.20151002_225659
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.002366 263911913
master# mysql --batch --skip-column-names -e "show variables like 'innodb_data_file_path'"
innodb_data_file_path ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
</syntaxhighlight>
Set the innodb_data_file_path to the same value on the slave.
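For example, in the slave's MariaDB/MySQL configuration (the value is copied verbatim from the master's output above):

```ini
[mysqld]
innodb_data_file_path = ibdata1:5G;ibdata2:5G;ibdata3:5G;ibdata4:50M:autoextend
```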
==Copy the data to the slave==
<syntaxhighlight lang=bash>
slave# DATADIR="$(mysql --batch --skip-column-names -e "show variables like 'datadir'" | awk '{print $NF;}')"
slave# ssh -c blowfish master "cd /mnt ; tar cSpzf - ." | ( cd ${DATADIR} ; tar xlvSpzf - )
</syntaxhighlight>
==Create replication user on master==
A typical grant (user, host and password are placeholders):
<syntaxhighlight lang=bash>
master# mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave.example.com' IDENTIFIED BY 'the_replication_password';"
</syntaxhighlight>
==Setup slave==
Point the slave at the master using the binlog file and position recorded in the master_status file above (here mysql-bin.002366 / 263911913; host, user and password are placeholders):
<syntaxhighlight lang=bash>
slave# mysql -e "CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl', MASTER_PASSWORD='the_replication_password', MASTER_LOG_FILE='mysql-bin.002366', MASTER_LOG_POS=263911913; START SLAVE;"
</syntaxhighlight>
dfbdb9981892ddfba431db6ab13b15b8b228fd65
VirtualBox physical mapping
0
355
2640
2415
2022-03-02T09:20:32Z
Lollypop
2
wikitext
text/x-wiki
[[category:VirtualBox]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems when installing Windows updates alongside grub: some Windows updates fail if grub is in your MBR.
===Create a dummy mbr===
<syntaxhighlight lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
===Create the mapping as a VMDK file===
<syntaxhighlight lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
After that create a VM and use this special VMDK file.
dc31c548a7d5e87432c13450b80466cd59acb9cd
2641
2640
2022-03-02T09:21:07Z
Lollypop
2
wikitext
text/x-wiki
[[category:VirtualBox|physical mapping]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems when installing Windows updates alongside grub: some Windows updates fail if grub is in your MBR.
===Create a dummy mbr===
<syntaxhighlight lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
===Create the mapping as a VMDK file===
<syntaxhighlight lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
After that create a VM and use this special VMDK file.
d165c54f9093843fb821e6ee893955cf55e404b5
2642
2641
2022-03-02T09:21:52Z
Lollypop
2
wikitext
text/x-wiki
[[category:VirtualBox|Physical mapping]]
==Create a virtual mapping to your physical Windows==
In my example it is on partitions 1 and 2 of the disk.<br>
This helps me to work around problems when installing Windows updates alongside grub: some Windows updates fail if grub is in your MBR.
===Create a dummy mbr===
<syntaxhighlight lang=bash>
# apt install mbr
# install-mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
===Create the mapping as a VMDK file===
<syntaxhighlight lang=bash>
# VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Windows-physical.vmdk -rawdisk /dev/sda -partitions 1,2 -mbr /var/data/VMs/dev/mbr.img
</syntaxhighlight>
After that create a VM and use this special VMDK file.
056c6d4976e78919b59aae024c5c20b9f8146768
Git
0
394
2643
2022-03-02T14:59:31Z
Lollypop
2
Created page with "<syntaxhighlight lang=bash> $ git init --bare ansible.git $ cd ansible.git $ git config receive.denyCurrentBranch ignore $ git clone /home/ansible/ansible.git ansible_ $ cd ansible_ $ cp -rp ~/ansible/* . $ git add . $ git commit --all -m "Initial project" $ git push </syntaxhighlight>"
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
$ git init --bare ansible.git
$ cd ansible.git
$ git config receive.denyCurrentBranch ignore
$ git clone /home/ansible/ansible.git ansible_
$ cd ansible_
$ cp -rp ~/ansible/* .
$ git add .
$ git commit --all -m "Initial project"
$ git push
</syntaxhighlight>
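The same flow can be exercised end-to-end in a scratch directory; everything below is a self-contained sketch (the identity and the site.yml file are stand-ins for the real project files):

```shell
# Reproduce the bare-repo + clone + first-commit + push flow above.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init --bare ansible.git
git clone ansible.git ansible_          # clone of the (still empty) bare repo
cd ansible_
git config user.email "you@example.com" # stand-in identity for the commit
git config user.name "You"
echo "all:" > site.yml                  # stand-in for the real project files
git add .
git commit -m "Initial project"
git push origin HEAD                    # publish to the bare ansible.git
```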
46c3cdd2d7198c23c9dcff02965b6a02b3ea8074
Category:Sendmail
14
101
2644
2343
2022-03-04T07:31:41Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Mail]]
=When you can't do without Sendmail after all=
a03f3db1763113e4b6691d538f9d09c5196fe496
Postfix
0
395
2645
2022-03-04T07:39:30Z
Lollypop
2
Created page with "First of all: Postfix is a unwanted fuck in the administrators ass from hell. Use Exim. If you are a poor admin and can not do this, here is a minimal set of commands. ==Display queue== <SyntaxHighlight> root@mail:~# postqueue -p </SyntaxHighlight> ==Display mail content from a mail in the queue== <SyntaxHighlight> root@mail:~# postcat -vq 4K8PQr0VlYz2xV1 </SyntaxHighlight> ==Delete a mail from the queue== <SyntaxHighlight> root@mail:~# postsuper -d 4K8PQr0VlYz2xV1 <..."
wikitext
text/x-wiki
First of all: Postfix is an unwanted pain in the administrator's ass, straight from hell. Use Exim. If you are a poor admin and cannot do that, here is a minimal set of commands.
==Display queue==
<SyntaxHighlight>
root@mail:~# postqueue -p
</SyntaxHighlight>
==Display mail content from a mail in the queue==
<SyntaxHighlight>
root@mail:~# postcat -vq 4K8PQr0VlYz2xV1
</SyntaxHighlight>
==Delete a mail from the queue==
<SyntaxHighlight>
root@mail:~# postsuper -d 4K8PQr0VlYz2xV1
</SyntaxHighlight>
==Search and delete a set of mails from the queue==
<SyntaxHighlight>
root@mail:~# postqueue -p | awk -v search='hello@' '/^[0-9a-zA-Z]/{qid=$1; next;} $0 ~ search {print qid;}' | postsuper -d -
postsuper: 4K7xpC29QDz2xTW: removed
postsuper: 4K7zpF2D0Wz2xTZ: removed
postsuper: 4K7wST3jD5z2xTQ: removed
postsuper: 4K7lpn5dtHz2xH2: removed
postsuper: 4K7mFh1ds5z2xJp: removed
postsuper: 4K8MP352Clz2xTq: removed
postsuper: Deleted: 6 messages
</SyntaxHighlight>
aa26d4212938e2cba51594838664af93da90398f
2646
2645
2022-03-04T07:51:03Z
Lollypop
2
wikitext
text/x-wiki
First of all: Postfix is an unwanted pain in the administrator's ass, straight from hell. Use Exim. If you are a poor admin and cannot do that, here is a minimal set of commands.
==Display queue==
<SyntaxHighlight>
root@mail:~# postqueue -p
</SyntaxHighlight>
==Display mail content from a mail in the queue==
<SyntaxHighlight>
root@mail:~# postcat -vq 4K8PQr0VlYz2xV1
</SyntaxHighlight>
==Delete a mail from the queue==
<SyntaxHighlight>
root@mail:~# postsuper -d 4K8PQr0VlYz2xV1
</SyntaxHighlight>
==Search and delete a set of mails from the queue==
To delete all mails in the queue whose recipient matches 'hello@':
<SyntaxHighlight>
root@mail:~# postqueue -p | awk -v search='hello@' '/^[0-9a-zA-Z]/{qid=$1; next;} $0 ~ search {print qid;}' | postsuper -d -
postsuper: 4K7xpC29QDz2xTW: removed
postsuper: 4K7zpF2D0Wz2xTZ: removed
postsuper: 4K7wST3jD5z2xTQ: removed
postsuper: 4K7lpn5dtHz2xH2: removed
postsuper: 4K7mFh1ds5z2xJp: removed
postsuper: 4K8MP352Clz2xTq: removed
postsuper: Deleted: 6 messages
</SyntaxHighlight>
5e1888492522d9cbe6ab19b23ecc8ff158308b23
2647
2646
2022-03-04T07:55:17Z
Lollypop
2
wikitext
text/x-wiki
[[category:Mail]]
First of all: Postfix is an unwanted pain in the administrator's ass, straight from hell. Use Exim. If you are a poor admin and cannot do that, here is a minimal set of commands.
==Display queue==
<syntaxhighlight lang="bash">
root@mail:~# postqueue -p
</syntaxhighlight>
==Display mail content from a mail in the queue==
<syntaxhighlight lang="bash">
root@mail:~# postcat -vq 4K8PQr0VlYz2xV1
</syntaxhighlight>
==Delete a mail from the queue==
<syntaxhighlight lang="bash">
root@mail:~# postsuper -d 4K8PQr0VlYz2xV1
</syntaxhighlight>
==Search and delete a set of mails from the queue==
To delete all mails in the queue whose recipient matches 'hello@':
<syntaxhighlight lang="bash">
root@mail:~# postqueue -p | awk -v search='hello@' '/^[0-9a-zA-Z]/{qid=$1; next;} $0 ~ search {print qid;}' | postsuper -d -
postsuper: 4K7xpC29QDz2xTW: removed
postsuper: 4K7zpF2D0Wz2xTZ: removed
postsuper: 4K7wST3jD5z2xTQ: removed
postsuper: 4K7lpn5dtHz2xH2: removed
postsuper: 4K7mFh1ds5z2xJp: removed
postsuper: 4K8MP352Clz2xTq: removed
postsuper: Deleted: 6 messages
</syntaxhighlight>
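The awk stage above can be dry-tested without touching the live queue by feeding it captured <code>postqueue -p</code> output and leaving off the destructive <code>postsuper</code> stage (the function name is just for illustration):

```shell
# Same queue-ID extraction logic as above, minus the postsuper stage:
# lines starting with an alphanumeric begin a new queue entry (first field
# is the queue ID); indented lines holding a matching address print that ID.
match_qids() {
    awk -v search="$1" '/^[0-9a-zA-Z]/{qid=$1; next;} $0 ~ search {print qid;}'
}
```

Pipe saved <code>postqueue -p</code> output through <code>match_qids 'hello@'</code> to preview which IDs would be deleted.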
fd6104d864121bb2fedc6819dac744b0883cc310
Exim cheatsheet
0
27
2648
2517
2022-03-07T16:16:34Z
Lollypop
2
/* Ratelimit für einen User zurücksetzen */
wikitext
text/x-wiki
[[Category:Exim]]
=Questions and answers=
==View the headers of a message ID==
<pre># exim -mvh <msgid></pre>
==View statistics of the current queue==
<pre># exim -bpu | exiqsumm <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <domain></pre>
==How do I trigger delivery of ONE specific mail again?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the log files
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
exipick is even better than exigrep!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue from <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are more than 40 and less than 50 minutes old and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
For example:
<pre># exim -bP message_size_limit</pre>
==Always good: look at the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Display configured TLS settings==
===gnutls===
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority "$(exim -bP tls_require_ciphers | awk '{print $NF}')"
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_ECDHE_RSA_CHACHA20_POLY1305 0xcc, 0xa8 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2
TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2
TLS_ECDHE_ECDSA_CHACHA20_POLY1305 0xcc, 0xa9 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_DSS_AES_256_GCM_SHA384 0x00, 0xa3 TLS1.2
TLS_DHE_DSS_AES_128_GCM_SHA256 0x00, 0xa2 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
TLS_DHE_RSA_AES_256_GCM_SHA384 0x00, 0x9f TLS1.2
TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2
TLS_DHE_RSA_CHACHA20_POLY1305 0xcc, 0xaa TLS1.2
Protocols: VERS-TLS1.2
Ciphers: AES-256-CBC, AES-128-CBC, AES-256-GCM, AES-128-GCM, CHACHA20-POLY1305
MACs: SHA256, SHA384, AEAD
Key Exchange Algorithms: ECDHE-RSA, ECDHE-ECDSA, DHE-DSS, DHE-RSA
Groups: GROUP-SECP256R1, GROUP-SECP384R1
PK-signatures: SIGN-RSA-SHA256
</syntaxhighlight>
==Reset the ratelimit for a user==
Find the entries:
<syntaxhighlight lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</syntaxhighlight>
Delete the entries:
This is done with the somewhat unwieldy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then press d (for delete) followed by Enter, and the entry is gone.
<syntaxhighlight lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</syntaxhighlight>
==Spam==
<syntaxhighlight lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
if [ "$(basename $file .gz)" == "$(basename $file)" ]
then
command="cat"
else
command="gzip -cd"
fi
printf "%16s - %16s : %7s\t%s\n" \
"$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
"$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
"$(${command} ${file} | grep -c 'result: Y')" \
"$(basename ${file})"
done
</syntaxhighlight>
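The plain-vs-gzip dispatch used in that loop can be exercised on scratch files first (file names and contents below are made up):

```shell
# Build one plain and one gzipped logfile, then count 'result: Y' lines
# in each, choosing cat or gzip -cd exactly as the loop above does
tmp=$(mktemp -d)
echo 'result: Y' > "$tmp/spamd-exim-acl.log.1"
echo 'result: Y' | gzip -9 > "$tmp/spamd-exim-acl.log.2.gz"
out=""
for file in "$tmp"/spamd-exim-acl.log.*; do
  if [ "$(basename "$file" .gz)" = "$(basename "$file")" ]; then
    command="cat"         # no .gz suffix: read the file directly
  else
    command="gzip -cd"    # .gz suffix: decompress to stdout
  fi
  out="$out$(basename "$file"):$(${command} "$file" | grep -c 'result: Y') "
done
echo "$out"   # spamd-exim-acl.log.1:1 spamd-exim-acl.log.2.gz:1
rm -r "$tmp"
```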
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</syntaxhighlight>
But logrotate with these files is a bit tricky.
I found this to be a good way to rotate the logfiles:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
daily
rotate 0
ifempty
create
lastaction
# gzip all files matching the regex that are not from today
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
# delete gzipped files matching the regex that are older than 90 days
/usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
endscript
}
</pre>
== Touch the dummy rotate file ==
This one is needed to trigger the rotation, even though it is only a dummy.
<syntaxhighlight lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</syntaxhighlight>
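The two find invocations in the lastaction script can be dry-run against made-up file names before you rely on them (GNU find is assumed for -regextype):

```shell
# Only datestamped main/reject logs and the paniclog should match the regex
tmp=$(mktemp -d)
touch "$tmp/mainlog-20220301" "$tmp/rejectlog-20220301" "$tmp/paniclog" "$tmp/otherlog-20220301"
matches=$(find "$tmp" -regextype posix-awk \
  -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' | sort)
echo "$matches"   # mainlog-20220301, paniclog, rejectlog-20220301
rm -r "$tmp"
```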
c10156e2c1c81ad0ffbd04afcc7defa32d82a7cb
Ansible tips and tricks
0
299
2649
2613
2022-03-09T10:22:18Z
Lollypop
2
/* Ansible commandline */
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
== Ansible commandline ==
=== Get settings for host ===
Gathering settings for host in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
Gathering groups for host in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
Get all installed kernel versions:
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads some variables from the response file, sets each of them as a fact (its name prefixed with oracle_ if not already), and collects them in the variable <i>oracle_environment</i>, which can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
== Gathering oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env._setitem_(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
b4148a3dbf2f5618a2540a5c3ea71ee87dba8aa9
Systemd
0
233
2650
2477
2022-04-12T18:06:54Z
Lollypop
2
/* /etc/systemd/system/tomcat-example.service */
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, like daemon names usually are, this one is written in lowercase.
=What is systemd?=
systemd is a replacement for Linux's old and rusty init system.
It has many new features: it extends the classic init with the ability to watch processes after startup has finished and to list sockets owned by processes started by systemd, and it adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware and software related units.
<syntaxhighlight lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</syntaxhighlight>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<syntaxhighlight lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</syntaxhighlight>
Ah, deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<syntaxhighlight lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</syntaxhighlight>
==Display unit declaration==
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==Sockets==
<syntaxhighlight lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</syntaxhighlight>
==View dependencies==
What depends on ''zfs.target'':
<syntaxhighlight lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</syntaxhighlight>
And what do we need to reach the ''zfs.target''?
<syntaxhighlight lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</syntaxhighlight>
==Get the main PID of a service==
<syntaxhighlight lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</syntaxhighlight>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<syntaxhighlight lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</syntaxhighlight>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
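Whether a reduced bounding set actually took effect can be read from /proc; every process exposes its bounding set there. This sketch inspects the current shell; for a service you would substitute its main PID:

```shell
# CapBnd is a bitmask; a service started with CapabilityBoundingSet=
# shows a correspondingly reduced mask here
grep '^CapBnd' /proc/self/status
```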
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits UID changes: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
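The kernel tracks this flag per process, so it can be verified the same way as the bounding set; it is 0 for a normal shell and 1 for a process running under ''NoNewPrivileges=true'':

```shell
# NoNewPrivs: 0 for a normal process, 1 under NoNewPrivileges=true
grep '^NoNewPrivs' /proc/self/status
```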
==Limiting access to a socket==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
Deny from all, but the monitoring server (172.17.128.193):
<syntaxhighlight lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</syntaxhighlight>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
First remove old value, then set new one.
<syntaxhighlight lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</syntaxhighlight>
=systemd-resolved the name resolve service=
==Status==
<syntaxhighlight lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</syntaxhighlight>
==Cache statistics==
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
==Flush the cache==
<syntaxhighlight lang=bash>
$ systemd-resolve --flush-caches
</syntaxhighlight>
Check with:
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
=systemd-timesyncd an alternative to ntp=
ntpd is a good old fat workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</syntaxhighlight>
The NTP server list is space-separated.
FallbackNTP lists servers to fall back to if none of the servers in the NTP list can be reached.
If you want to split the settings into multiple files or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
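Such a drop-in could be generated like this; the sketch writes to a temporary directory so it is safe to run anywhere, while the real location is <i>/etc/systemd/timesyncd.conf.d/</i> (the file name 10-local.conf is arbitrary):

```shell
# Write a minimal [Time] drop-in to a stand-in directory
dir=$(mktemp -d)   # stand-in for /etc/systemd/timesyncd.conf.d
printf '[Time]\nNTP=ptbtime1.ptb.de hora.cs.tu-berlin.de\n' > "$dir/10-local.conf"
cat "$dir/10-local.conf"
```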
After you have set up the config you can enable timesyncd via:
<syntaxhighlight lang=bash>
# timedatectl set-ntp true
</syntaxhighlight>
Control your success with:
<syntaxhighlight lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</syntaxhighlight>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<syntaxhighlight lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</syntaxhighlight>
Hmm... let us take a look at ntp:
<syntaxhighlight lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</syntaxhighlight>
Maybe we should uninstall or disable ntp first ;-).
<syntaxhighlight lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</syntaxhighlight>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
This means to reach the ''zfs.target'' we want that ''zed.service'' is started if enabled and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleared. This is done with a separate namespace for this unit.
<syntaxhighlight lang=ini>
[Unit]
...
PrivateTmp=true|false
...
</syntaxhighlight>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<syntaxhighlight lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</syntaxhighlight>
With this capability set we can use this as normal user:
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</syntaxhighlight>
If we remove this capability it does not work:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</syntaxhighlight>
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</syntaxhighlight>
Of course it still works as root as root has all capabilities:
<syntaxhighlight lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</syntaxhighlight>
So we better set this capability again:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</syntaxhighlight>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here: /var/chroot), some things are a bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</syntaxhighlight>
===Get the name for the needed unit file===
The name of a .mount-unit file has to be the mount destination path. Dashes must be escaped. To get the resulting name you can easily use systemd-escape.
<syntaxhighlight lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</syntaxhighlight>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double escape (\\) the x2d (which is a dash -).
<syntaxhighlight lang=bash>
# vi /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<syntaxhighlight lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
===Mount the socket===
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</syntaxhighlight>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf===
<syntaxhighlight lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</syntaxhighlight>
Restart the journal daemon:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<syntaxhighlight>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</syntaxhighlight>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</syntaxhighlight>
===Restart syslog-ng daemon===
<syntaxhighlight lang=bash>
# systemctl restart syslog-ng.service
</syntaxhighlight>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of, for example, <i>apache2.service</i>, you can put a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> that are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot-id.
What is that? An ID which is generated at each boot.
You can get the boot-id with:
<syntaxhighlight lang=bash>
# journalctl --list-boots
</syntaxhighlight>
The second field of the last line is the current one, e.g.:
<syntaxhighlight lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</syntaxhighlight>
When will that be? Try:
<syntaxhighlight lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</syntaxhighlight>
OK, but you probably want to run it once an hour? OK, just reschedule the timer like this:
<syntaxhighlight lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</syntaxhighlight>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== fwupd.service behind proxy ==
<syntaxhighlight lang=bash>
# systemctl edit fwupd-refresh.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Service]
Environment=http_proxy="http://user:passw0rd@proxy.intern.net:8080" https_proxy="http://user:passw0rd@proxy.intern.net:8080"
PassEnvironment=http_proxy https_proxy
</syntaxhighlight>
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<syntaxhighlight lang=ini>
# /etc/systemd/system/my-tomcat.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<syntaxhighlight>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</syntaxhighlight>
== Oracle ==
UNTESTED, just an example!
Save this as /usr/lib/systemd/system/dbora@.service (SLES12):
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# systemd runs Exec lines without a shell, so no redirections or trailing &
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</syntaxhighlight>
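The @ in the unit name makes this a template: everything after the @ becomes the instance string %i, which the unit uses to pick the matching ORACLE_HOME. A quick shell sketch of the expansion, with the paths assumed from the unit above:

```shell
# sketch: how systemd expands %i for a template unit instance
instance="12cR2"                                  # from dbora@12cR2.service
unit="dbora@${instance}.service"
oracle_home="/opt/oracle/product/${instance}/db"
echo "$unit"          # prints dbora@12cR2.service
echo "$oracle_home"   # prints /opt/oracle/product/12cR2/db
```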
c2ac19739948a302c7df8b3f35aa1dae658e12b5
ISCSI Initiator with Linux
0
387
2652
2651
2022-05-06T12:40:21Z
Lollypop
2
/* /etc/multipath.conf */
wikitext
text/x-wiki
[[Category:Linux|iSCSI]]
[[Category:iSCSI|Linux]]
= iSCSI with jumbo-frames and multipathing =
== Configure networking ==
=== LACP-bonding for the frontend ===
==== /etc/netplan/bond0.yaml ====
<syntaxhighlight lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: false
dhcp6: false
optional: true
eno2:
dhcp4: false
dhcp6: false
optional: true
bonds:
bond0:
interfaces:
- eno1
- eno2
parameters:
lacp-rate: slow
mode: 802.3ad
transmit-hash-policy: layer2
addresses:
- 10.71.112.135/16
gateway4: 10.71.101.1
nameservers:
addresses:
- 10.71.111.11
- 10.71.111.12
search:
- domain.de
</syntaxhighlight>
=== Two dedicated 10GE interfaces with jumbo-frames for the backend ===
==== /etc/netplan/iscsi.yaml ====
<syntaxhighlight lang=yaml>
network:
version: 2
renderer: networkd
ethernets:
enp132s0f0:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.250.71.32/24
set-name: iscsi0
match:
macaddress: a0:36:9f:d4:cd:1a
enp132s0f1:
dhcp4: false
dhcp6: false
mtu: 9000
addresses:
- 10.251.71.32/24
set-name: iscsi1
match:
macaddress: a0:36:9f:d4:cd:18
</syntaxhighlight>
=== Apply the parameters and check settings ===
<syntaxhighlight lang=bash>
# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever
# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default
qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever
# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
</syntaxhighlight>
=== Check if all components are configured right for jumbo-frames ===
<syntaxhighlight lang=bash>
# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms
# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms
</syntaxhighlight>
If the ping fails, one of the switches along the path or the iSCSI storage itself is missing jumbo-frame settings.
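The -s 8972 payload is not arbitrary: it is the 9000-byte MTU minus the IPv4 header (20 bytes) and the ICMP header (8 bytes), so with -M do (don't fragment) the ping exactly fills one jumbo frame:

```shell
# payload size that exactly fills a 9000-byte MTU frame
mtu=9000
ipv4_header=20
icmp_header=8
payload=$((mtu - ipv4_header - icmp_header))
echo "$payload"   # prints 8972
```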
== Configure iSCSI ==
=== Setup initiator iqn ===
Set up a new IQN:
<syntaxhighlight>
# /sbin/iscsi-iname
</syntaxhighlight>
The result goes into /etc/iscsi/initiatorname.iscsi:
==== /etc/iscsi/initiatorname.iscsi ====
<syntaxhighlight>
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
</syntaxhighlight>
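As a sketch of what /sbin/iscsi-iname produces: a Debian-style initiator name is the fixed prefix iqn.1993-08.org.debian:01: plus twelve random hex digits. The exact format is distribution-specific, so treat this as an illustration only:

```shell
# illustrative only: build an IQN-like name with a 12-hex-digit random suffix
suffix=$(head -c 6 /dev/urandom | od -An -tx1 | tr -d ' \n')
iqn="iqn.1993-08.org.debian:01:${suffix}"
echo "$iqn"
```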
=== Setup iSCSI-Interfaces ===
<syntaxhighlight lang=bash>
# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</syntaxhighlight>
<syntaxhighlight lang=bash>
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
</syntaxhighlight>
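Since both interfaces get identical treatment, the four iscsiadm calls can be generated from a loop; shown here as a dry run that only prints the commands instead of executing them:

```shell
# dry run: print the iscsiadm calls for both iSCSI interfaces
for ifc in iscsi0 iscsi1; do
  echo "iscsiadm -m iface -I $ifc -o new"
  echo "iscsiadm -m iface -I $ifc --op=update -n iface.net_ifacename -v $ifc"
done
```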
=== Discover LUNs that are offered by the storage ===
<syntaxhighlight lang=bash>
# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
</syntaxhighlight>
<syntaxhighlight lang=bash>
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
</syntaxhighlight>
=== Login to discovered LUNs ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
</syntaxhighlight>
=== Take a look at the running session ===
<syntaxhighlight lang=bash>
# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
Current Portal: 10.250.71.1:3260,1
Persistent Portal: 10.250.71.1:3260,1
**********
Interface:
**********
Iface Name: iscsi0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.250.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
Current Portal: 10.251.71.1:3260,2
Persistent Portal: 10.251.71.1:3260,2
**********
Interface:
**********
Iface Name: iscsi1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
Iface IPaddress: 10.251.71.32
Iface HWaddress: <empty>
Iface Netdev: iscsi1
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
</syntaxhighlight>
=== Check the session is still ok after a restart of iscsid.service ===
<syntaxhighlight lang=bash>
# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-
08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
</syntaxhighlight>
=== Enable automatic startup of connection ===
<syntaxhighlight lang=bash>
# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
</syntaxhighlight>
=== Check timeout parameter ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
</syntaxhighlight>
=== Adjust timeout values to your needs ===
<syntaxhighlight lang=bash>
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 -o update -n node.session.timeo.replacement_timeout -v 10
</syntaxhighlight>
== Configure multipathing ==
=== List SCSI devices ===
<syntaxhighlight lang=bash>
# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.30 /dev/sda <--- this is our internal disk / raid
[11:0:0:0] cd/dvd HL-DT-ST DVD+-RW GTA0N A3C0 /dev/sr0
[12:0:0:1] disk HUAWEI XSG1 4305 /dev/sdb <--- this is our iSCSI-storage
[13:0:0:1] disk HUAWEI XSG1 4305 /dev/sdc <--- this is our iSCSI-storage
</syntaxhighlight>
=== Get wwids for devices ===
<syntaxhighlight lang=bash>
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
</syntaxhighlight>
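Note that sdb and sdc report the same wwid because they are two paths to the same LUN, while sda (the internal RAID) differs; that shared wwid is exactly the property multipathd groups on, and the odd one out is what we blacklist:

```shell
# two paths to one LUN share a wwid; the local disk does not
wwid_sda=361866da075bdee001f9a2ede2705b9ba
wwid_sdb=3628dee5100f846b5243be07d00000004
wwid_sdc=3628dee5100f846b5243be07d00000004
[ "$wwid_sdb" = "$wwid_sdc" ] && echo "sdb/sdc: same LUN, multipath them"
[ "$wwid_sda" = "$wwid_sdb" ] || echo "sda: different device, blacklist it"
```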
=== Setup multipathing configuration ===
==== /etc/multipath.conf ====
<syntaxhighlight>
defaults {
user_friendly_names yes
}
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy failover
path_checker tur
prio const
path_selector "round-robin 0"
failback immediate
no_path_retry 15
dev_loss_tmo 30
fast_io_fail_tmo 5
}
}
blacklist {
# devnode "^sd[a]$"
# I highly recommend you blacklist by wwid instead of device name
# blacklist /dev/sda by wwid
wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
multipath {
wwid 3628dee5100f846b5243be07d00000004
# alias here can be anything descriptive for your LUN
alias data
}
}
</syntaxhighlight>
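Keep in mind that no_path_retry is measured in path-checker intervals, not seconds. Assuming the default polling_interval of 5 seconds, the setting above queues I/O for roughly 75 seconds after all paths fail before erroring out:

```shell
# rough queueing window after all paths fail
# (assumes the multipathd default polling_interval of 5 seconds)
no_path_retry=15
polling_interval=5
queue_seconds=$((no_path_retry * polling_interval))
echo "$queue_seconds"   # prints 75
```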
<syntaxhighlight lang=bash>
# systemctl restart multipathd.service
# journalctl -lfu multipathd.service
</syntaxhighlight>
<syntaxhighlight lang=bash>
# multipathd show config | less
<find HUAWEI and check parameters>
</syntaxhighlight>
=== Startup multipathing ===
From the multipath(1) man page:
<pre>
-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
</pre>
<syntaxhighlight lang=bash>
# multipath -r
</syntaxhighlight>
<syntaxhighlight lang=bash>
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 12:0:0:1 sdb 8:16 active ready running
`- 13:0:0:1 sdc 8:32 active ready running
</syntaxhighlight>
<syntaxhighlight lang=bash>
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
</syntaxhighlight>
=== Create a systemd unit to mount it at the right time during boot ===
<syntaxhighlight lang=bash>
# systemctl edit --force --full data.mount
</syntaxhighlight>
==== /etc/systemd/system/data.mount ====
<syntaxhighlight lang=ini>
[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target
[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults
</syntaxhighlight>
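systemd requires a mount unit's name to be the escaped mount point path, which is why the unit for /data must be called data.mount. For simple paths the escaping just drops the leading slash and turns the remaining slashes into dashes (systemd-escape -p handles the general case):

```shell
# sketch of systemd path escaping for simple paths (no special characters)
path="/data"
unit="$(printf '%s' "${path#/}" | tr / -).mount"
echo "$unit"   # prints data.mount
```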
=== Enable your unit on next reboot and start it for now ===
<syntaxhighlight lang=bash>
# systemctl enable data.mount
# systemctl start data.mount
</syntaxhighlight>
=== Check for success ===
<syntaxhighlight lang=bash>
# df -h /dev/mapper/data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/data 10T 72G 10T 1% /data
</syntaxhighlight>
== Further reading ==
External link collection:
* https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
* https://www.suse.com/support/kb/doc/?id=000019648
* https://ubuntu.com/server/docs/service-iscsi
684522089cded150e3e6fc609bc116441933b847
ZFS on Linux
0
222
2653
2626
2022-05-31T13:43:21Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as raw-vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop image (the live CD) and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
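The awk exponent function above is just log2 of the physical sector size: 512-byte sectors give ashift=9, 4K sectors give ashift=12. The same calculation in plain shell:

```shell
# ashift = log2(physical sector size)
sector=4096
ashift=0
v=$sector
while [ "$v" -gt 1 ]; do
  v=$((v / 2))
  ashift=$((ashift + 1))
done
echo "$ashift"   # prints 12
```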
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -I{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=bash>
$ sudo systemctl edit --force --full zfs-cryptswap@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service zfs-volumes.target
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStart=/bin/sleep 1
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
'''Be careful with the name after the @!''' The instance name after the @ is the name of the ZFS volume that will be DESTROYED and recreated on every start of the service. To set up the encrypted swap volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k all -u
</syntaxhighlight>
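The <code>ExecStartPre</code> dd line in the unit above generates a fresh 32-byte raw key on every start, so the swap contents are unrecoverable once the volume is destroyed. As a standalone sketch (temporary file path chosen only for illustration; the real unit writes to /run/zfs-cryptswap.&lt;name&gt;/&lt;name&gt;.key):
<syntaxhighlight lang=bash>
# Generate a 32-byte random key, as the cryptswap unit does on each start.
# A raw ZFS encryption key (keyformat=raw) must be exactly 32 bytes.
keyfile=$(mktemp)
dd if=/dev/urandom of="$keyfile" bs=32 count=1 2>/dev/null
stat -c %s "$keyfile"   # prints 32
rm -f "$keyfile"
</syntaxhighlight>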
==Kernel settings for ZFS==
=== Set module parameters in /etc/modprobe.d/zfs.conf ===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which these options are read when the module is loaded at boot.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
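For reference, the zfs_arc_max value above is 10 GiB expressed in bytes; computing such values with shell arithmetic avoids counting zeros by hand:
<syntaxhighlight lang=bash>
# 10 GiB in bytes, matching zfs_arc_max=10737418240 above
gib=10
echo $(( gib * 1024 * 1024 * 1024 ))   # prints 10737418240
</syntaxhighlight>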
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current cache usage===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
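To pull a single value such as c_max out of arcstats, a small awk filter is enough. A sketch, run here against a captured sample instead of the live /proc file:
<syntaxhighlight lang=bash>
# Extract c_max (bytes) from arcstats output; the sample stands in
# for /proc/spl/kstat/zfs/arcstats
sample='c_min 4 521779200
c_max 4 1073741824'
c_max=$(printf '%s\n' "$sample" | awk '$1 == "c_max" {print $3}')
echo $(( c_max / 1024 / 1024 ))   # prints 1024 (MiB)
</syntaxhighlight>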
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
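The hit-rate formula the awk script applies per cache is simply hits * 100 / (hits + misses). The same calculation as a standalone helper (hypothetical function name):
<syntaxhighlight lang=bash>
# Integer hit rate in percent, as computed by the awk script above
hitrate() {
    local hits=$1 misses=$2 total
    total=$(( hits + misses ))
    # Avoid division by zero when no requests were seen in the interval
    if [ "$total" -eq 0 ]; then echo 0; return; fi
    echo $(( hits * 100 / total ))
}
hitrate 90 10    # prints 90
hitrate 0 0      # prints 0
</syntaxhighlight>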
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
e3dd8276b52f1e247b9cc6ad4de4a5f4b7f1aebe
2654
2653
2022-05-31T13:44:24Z
Lollypop
2
/* Swap on ZFS with random key encryption */
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as raw VMDK devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop live CD and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
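The exponent() function in the awk one-liner just computes the base-2 logarithm of the physical sector size, which is exactly the ashift value. The same calculation as a standalone helper (hypothetical function name ashift_for):
<syntaxhighlight lang=bash>
# ashift = log2(physical sector size): 512 -> 9, 4096 -> 12
ashift_for() {
    local size=$1 exp=0
    while [ "$size" -gt 1 ]; do
        size=$(( size / 2 ))
        exp=$(( exp + 1 ))
    done
    echo "$exp"
}
ashift_for 512    # prints 9
ashift_for 4096   # prints 12
</syntaxhighlight>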
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# reconnect via ssh before continuing
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -I{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=bash>
$ sudo systemctl edit --force --full zfs-cryptswap@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service zfs-volumes.target
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStart=/bin/sleep 1
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
'''Be careful with the name after the @!''' The instance name after the @ is the name of the ZFS volume that will be DESTROYED and recreated on every start of the service. To set up the encrypted swap volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k $(uname -r) -u
</syntaxhighlight>
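The <code>ExecStartPre</code> dd line in the unit generates a fresh 32-byte raw key on every start, so the swap contents are unrecoverable once the volume is destroyed. As a standalone sketch (temporary file path chosen only for illustration; the real unit writes to /run/zfs-cryptswap.&lt;name&gt;/&lt;name&gt;.key):
<syntaxhighlight lang=bash>
# Generate a 32-byte random key, as the cryptswap unit does on each start.
# A raw ZFS encryption key (keyformat=raw) must be exactly 32 bytes.
keyfile=$(mktemp)
dd if=/dev/urandom of="$keyfile" bs=32 count=1 2>/dev/null
stat -c %s "$keyfile"   # prints 32
rm -f "$keyfile"
</syntaxhighlight>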
==Kernel settings for ZFS==
=== Set module parameters in /etc/modprobe.d/zfs.conf ===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting; the initramfs is the filesystem from which these options are read when the module is loaded at boot.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
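For reference, the zfs_arc_max value above is 10 GiB expressed in bytes; computing such values with shell arithmetic avoids counting zeros by hand:
<syntaxhighlight lang=bash>
# 10 GiB in bytes, matching zfs_arc_max=10737418240 above
gib=10
echo $(( gib * 1024 * 1024 * 1024 ))   # prints 10737418240
</syntaxhighlight>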
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current cache usage===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
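To pull a single value such as c_max out of arcstats, a small awk filter is enough. A sketch, run here against a captured sample instead of the live /proc file:
<syntaxhighlight lang=bash>
# Extract c_max (bytes) from arcstats output; the sample stands in
# for /proc/spl/kstat/zfs/arcstats
sample='c_min 4 521779200
c_max 4 1073741824'
c_max=$(printf '%s\n' "$sample" | awk '$1 == "c_max" {print $3}')
echo $(( c_max / 1024 / 1024 ))   # prints 1024 (MiB)
</syntaxhighlight>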
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
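The hit-rate formula the awk script applies per cache is simply hits * 100 / (hits + misses). The same calculation as a standalone helper (hypothetical function name):
<syntaxhighlight lang=bash>
# Integer hit rate in percent, as computed by the awk script above
hitrate() {
    local hits=$1 misses=$2 total
    total=$(( hits + misses ))
    # Avoid division by zero when no requests were seen in the interval
    if [ "$total" -eq 0 ]; then echo 0; return; fi
    echo $(( hits * 100 / total ))
}
hitrate 90 10    # prints 90
hitrate 0 0      # prints 0
</syntaxhighlight>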
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]
* [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]
fc5d8d94e0c2067eb0092dbf1baae76fa1aacd82
2678
2654
2022-10-11T12:03:30Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as raw VMDK devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop live CD and choose "Try Ubuntu".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
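The exponent() function in the awk one-liner just computes the base-2 logarithm of the physical sector size, which is exactly the ashift value. The same calculation as a standalone helper (hypothetical function name ashift_for):
<syntaxhighlight lang=bash>
# ashift = log2(physical sector size): 512 -> 9, 4096 -> 12
ashift_for() {
    local size=$1 exp=0
    while [ "$size" -gt 1 ]; do
        size=$(( size / 2 ))
        exp=$(( exp + 1 ))
    done
    echo "$exp"
}
ashift_for 512    # prints 9
ashift_for 4096   # prints 12
</syntaxhighlight>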
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# reconnect via ssh before continuing
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
Comment out: GRUB_HIDDEN_TIMEOUT=0
Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=bash>
$ sudo systemctl edit --force --full zfs-cryptswap@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service zfs-volumes.target
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStart=/bin/sleep 1
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
!!!BE CAREFUL with the name after @ !!!
The name after the @ is the name of the ZFS that will be DESTROYED and recreated!!!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k $(uname -i) -u
</syntaxhighlight>
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase them so scrub/resilver is more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before boot. This is the filesystem which is read when your module is loaded.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
===Limit the cache without reboot non permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example limit it to 512MB (which is too small for production environments, just an example...):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After reboot this value take effect.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==More information on zpool status==
<SyntaxHighlight lang=bash highlight=3-5>
#!/bin/bash
#
## print_zpool.sh
#
# Written by Lars Timmann <L@rs.Timmann.de> 2022
columns=5 # number of columns for zpool status
if [ ${1} == "iostat" ]
then
command="iostat -v"
columns=7
shift
fi
stdbuf --output=L zpool ${command:-status} -P ${*} | awk -v columns=${columns} '
BEGIN {
command="lsscsi --scsi_id";
while( command | getline lsscsi ) {
count=split(lsscsi,fields);
dev=fields[count-1];
scsi_id[dev]=fields[1];
}
close(command);
command="ls -Ul /dev/disk/by-id/*";
while( command | getline ) {
dev=$NF;
gsub(/[\.\/]/,"",dev);
dev_id=$(NF-2);
device[dev_id]="/dev/"dev;
}
close(command);
}
$1 ~ /\/dev\// {
line=$0;
dev_by_id=$1;
dev_no_part=dev_by_id;
gsub(/(-part|)[0-9]+$/,"",dev_no_part);
if( NF > 5) {
count=split(line,a,FS,seps);
line=seps[0];
for(i=1;i<columns;i++){
line=line a[i] seps[i];
}
line=line a[columns];
for(i=columns+1;i<=count;i++){
rest=rest a[i] seps[i];
}
}
printf("%s %s %s",line,scsi_id[device[dev_no_part]],device[dev_by_id]);
if(rest!=""){
printf(" %s",rest);
rest="";
}
printf("\n");
next;
}
/^errors:/ {
print;
fflush();
next;
}
{
print;
}'
</SyntaxHighlight>
==Backup ZFS settings==
A little script which may be used on your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [[https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]]
* [[https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]]
4f420b1d33ff7714f0092d1df3ead781a3808fff
2679
2678
2022-10-11T17:13:00Z
Lollypop
2
/* More information on zpool status */
wikitext
text/x-wiki
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for ZFS vdevs. Needed by GRUB.
# Add links for zfs_member devices only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
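For illustration, this is the symlink name the rule's `SYMLINK+=` assignment assembles from the udev environment; the variable values below are hypothetical examples (the serial matches the disk id used later on this page):
<syntaxhighlight lang=bash>
# Hypothetical example values for the udev variables the rule expands
ID_BUS=scsi
ID_SERIAL=36000c2932cdb62febff0b5ac93786dd4
partno=1   # udev substitutes %n with the partition number
link="${ID_BUS}-${ID_SERIAL}-part${partno}"
echo "${link}"   # -> scsi-36000c2932cdb62febff0b5ac93786dd4-part1
</syntaxhighlight>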
==Virtualbox on ZVols==
If you use ZVols as raw vmdk devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most of this is taken from [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot the Ubuntu Desktop live CD and choose "Try Ubuntu".
===Get the right ashift value===
For example, to check sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
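The `exponent()` helper in the awk one-liner computes log2 of the physical sector size; a standalone sketch of the same calculation (assuming power-of-two sector sizes, as lsblk reports them):
<syntaxhighlight lang=bash>
# Sketch: log2 of the physical sector size gives the ashift value
ashift_for() {
  local size=$1 exp=0
  while [ "$size" -gt 1 ]; do
    size=$((size / 2))
    exp=$((exp + 1))
  done
  echo "$exp"
}
ashift_for 512    # -> 9
ashift_for 4096   # -> 12
</syntaxhighlight>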
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# In /etc/default/grub:
#   comment out GRUB_HIDDEN_TIMEOUT=0
#   remove "quiet" and "splash" from GRUB_CMDLINE_LINUX_DEFAULT
#   uncomment GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=bash>
$ sudo systemctl edit --force --full zfs-cryptswap@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service zfs-volumes.target
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 4k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStart=/bin/sleep 1
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
'''Be careful with the name after the @!'''
The instance name after the @ names the ZFS volume that will be '''destroyed''' and recreated.
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k $(uname -r) -u
</syntaxhighlight>
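To see why the name matters, here is a sketch of how systemd derives the `%i` instance specifier from the unit name, and therefore which dataset the unit destroys:
<syntaxhighlight lang=bash>
# Sketch: the instance name after '@' maps to the dataset rpool/%i
unit="zfs-cryptswap@cryptswap.service"
instance=${unit#*@}           # strip everything up to and including '@'
instance=${instance%.service} # strip the unit suffix
echo "rpool/${instance}"      # -> rpool/cryptswap
</syntaxhighlight>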
==Kernel settings for ZFS==
=== Set module parameter in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver completes more quickly, at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting; the module options are read from it when the zfs module is loaded at boot.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
===Limit the cache without reboot (non-permanent)===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$[512*1024*1024]" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$[512*1024*1024]" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
After a reboot this value takes effect.
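As an aside, the `$[ ]` form used above is deprecated bash arithmetic syntax; `$(( ))` computes the same value:
<syntaxhighlight lang=bash>
# $(( )) is the modern replacement for the deprecated $[ ] arithmetic
arc_max=$((512 * 1024 * 1024))
echo "options zfs zfs_arc_max=${arc_max}"   # -> options zfs zfs_arc_max=536870912
</syntaxhighlight>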
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
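The hit-rate formula the awk filter applies to each hits/misses counter pair can be shown in isolation; the deltas below are made-up sample numbers:
<syntaxhighlight lang=bash>
# Made-up sample deltas to illustrate the hit-rate calculation above
hits=900
misses=100
rate=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.2f", (h * 100) / (h + m) }')
echo "hitrate: ${rate}%"   # -> hitrate: 90.00%
</syntaxhighlight>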
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
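Note that the status branch relies on bash brace expansion to name both scrub parameters in one token:
<syntaxhighlight lang=bash>
# Brace expansion produces both parameter names
echo zfs_vdev_scrub_{min,max}_active
# -> zfs_vdev_scrub_min_active zfs_vdev_scrub_max_active
</syntaxhighlight>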
==More information on zpool status==
<SyntaxHighlight lang=bash highlight=3-5>
#!/bin/bash
#
## print_zpool.sh
#
# Written by Lars Timmann <L@rs.Timmann.de> 2022
columns=5 # number of columns for zpool status
if [ ${#} -gt 0 ] && [ ${1} == "iostat" ]
then
command="iostat -v"
columns=7
shift
fi
stdbuf --output=L zpool ${command:-status} -P ${*} | awk -v columns=${columns} '
BEGIN {
command="lsscsi --scsi_id";
while( command | getline lsscsi ) {
count=split(lsscsi,fields);
dev=fields[count-1];
scsi_id[dev]=fields[1];
}
close(command);
command="ls -Ul /dev/disk/by-id/*";
while( command | getline ) {
dev=$NF;
gsub(/[\.\/]/,"",dev);
dev_id=$(NF-2);
device[dev_id]="/dev/"dev;
}
close(command);
}
$1 ~ /\/dev\// {
line=$0;
dev_by_id=$1;
dev_no_part=dev_by_id;
gsub(/(-part|)[0-9]+$/,"",dev_no_part);
if( NF > 5) {
count=split(line,a,FS,seps);
line=seps[0];
for(i=1;i<columns;i++){
line=line a[i] seps[i];
}
line=line a[columns];
for(i=columns+1;i<=count;i++){
rest=rest a[i] seps[i];
}
}
printf("%s %s %s",line,scsi_id[device[dev_no_part]],device[dev_by_id]);
if(rest!=""){
printf(" %s",rest);
rest="";
}
printf("\n");
next;
}
/^errors:/ {
print;
fflush();
next;
}
{
print;
}'
</SyntaxHighlight>
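The `gsub()` on dev_no_part strips a trailing partition suffix from a by-id device name so the whole-disk entry can be looked up; the same transformation expressed with sed (the device name is the example disk id used on this page):
<syntaxhighlight lang=bash>
# Strip a trailing "-partN" (or bare digits) from a by-id device name
dev_by_id="scsi-36000c2932cdb62febff0b5ac93786dd4-part1"
dev_no_part=$(echo "$dev_by_id" | sed -E 's/(-part)?[0-9]+$//')
echo "$dev_no_part"   # -> scsi-36000c2932cdb62febff0b5ac93786dd4
</syntaxhighlight>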
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
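A small sketch of how the EXCLUDE_REGEX filtering in print_local_options works: properties matching the pattern (here volsize and refreservation, as used by print_volume) are skipped when emitting create options:
<syntaxhighlight lang=bash>
# Properties matching EXCLUDE_REGEX are dropped; the rest are kept
EXCLUDE_REGEX='(volsize|refreservation)'
kept=""
for property in volsize compression refreservation; do
  if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]; then
    kept="${kept}${property} "
  fi
done
echo "${kept}"   # -> compression
</syntaxhighlight>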
==Links==
* [[https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]]
* [[https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]]
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i>, change
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to read journald's dev-log socket as an additional input:
Change this part of <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</syntaxhighlight>
or
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_SYS_CHROOT
AmbientCapabilities=CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
0444ce2b8ea0b7b12e7f7e6797a9c5d7dfa7f470
2656
2655
2022-07-01T16:42:00Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i>, change
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to read journald's dev-log socket as an additional input:
Change this part of <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</syntaxhighlight>
or
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_SYS_CHROOT
AmbientCapabilities=CAP_SYS_CHROOT
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
b82d799cfde2d8ba574cf4b2d3bd4327cfe9cabe
2657
2656
2022-07-07T06:19:33Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i>, change
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to read journald's dev-log socket as an additional input:
Change this part of <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd
# touch /var/chroot/run/systemd/notify
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
After=zfs-mount.service
Requires=var-chroot.mount
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
</syntaxhighlight>
or
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
98597f11b7f4214350dc28addb97f76fdda5b612
2658
2657
2022-09-23T10:43:24Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i>, change
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to read journald's dev-log socket as an additional input:
Change this part of <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns <-- bind mount from /usr/share/dns (dir containing the root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --type=mount | grep chroot-
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
</syntaxhighlight>
We also need a service that creates the /var/chroot/run/systemd/notify file, so that the notify socket from systemd can be bind-mounted onto it.
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
No files found for var-chroot-run-pdns\x2drecursor.mount.
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
a2efaabaa437f3bcb7ef4c8bfb337106ebb14130
2659
2658
2022-09-23T10:50:54Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i>, change
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to read journald's dev-log socket as an additional input:
Change this part of <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns <-- bind mount from /usr/share/dns (dir containing the root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --type=mount | grep chroot-
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
</syntaxhighlight>
We also need a service that creates the /var/chroot/run/systemd/notify file, so that the notify socket from systemd can be bind-mounted onto it.
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
No files found for var-chroot-run-pdns\x2drecursor.mount.
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
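The overrides above only keep CAP_SYS_CHROOT available; the daemons must also be told to chroot in their own configuration. A minimal sketch of the matching settings (option names are from the PowerDNS documentation; paths follow the /var/chroot layout used on this page, so adjust them to your setup):
<syntaxhighlight lang=ini>
# /etc/powerdns/pdns.conf (excerpt, sketch)
chroot=/var/chroot
setuid=pdns
setgid=pdns
# /run/pdns inside the chroot is the bind mount of the real /run/pdns
socket-dir=/run/pdns
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/powerdns/recursor.conf (excerpt, sketch)
chroot=/var/chroot
setuid=pdns
setgid=pdns
socket-dir=/run/pdns-recursor
</syntaxhighlight>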
92687ef1796f7d028ab85f4af96d5bcd479578aa
2660
2659
2022-10-06T08:02:59Z
Lollypop
2
/* chroot with systemd */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
1. Tell systemd's journald to forward messages to syslog:
In <i>/etc/systemd/journald.conf</i>, change
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart journald:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
2. Tell syslog-ng to read journald's dev-log socket as an additional input:
Change this part of <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
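To check that messages sent to the dev-log socket actually reach syslog-ng, you can emit a test datagram. This is a hedged Python sketch: the helper name <code>syslog_frame</code> is ours, it builds a minimal RFC 3164-style message, and the send step only runs if the socket exists on the machine:

```python
import os
import socket

DEV_LOG = "/run/systemd/journal/dev-log"

def syslog_frame(facility: int, severity: int, tag: str, msg: str) -> bytes:
    """Build a minimal RFC 3164-style syslog datagram: <PRI>tag: message."""
    pri = facility * 8 + severity
    return ("<%d>%s: %s" % (pri, tag, msg)).encode()

# facility 1 (user), severity 6 (info)
frame = syslog_frame(1, 6, "syslog-ng-test", "hello from dev-log")
if os.path.exists(DEV_LOG):
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.connect(DEV_LOG)
        s.send(frame)
```

If the forwarding chain works, the test message should show up wherever your syslog-ng configuration routes the user facility.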
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns <-- bind mount from /usr/share/dns (dir containing the root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --all --type=mount,service var-chroot-* pdns*
UNIT LOAD ACTIVE SUB DESCRIPTION
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
pdns-recursor.service loaded active running PowerDNS Recursor
pdns.service loaded active running PowerDNS Authoritative Server
var-chroot-create-dirs.service loaded active exited Create directories under /var/chroot
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
9 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
</syntaxhighlight>
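The mount unit names above follow systemd's path-escaping rule: slashes become dashes and other special characters become \xXX escapes, which is why the recursor unit is called var-chroot-run-pdns\x2drecursor.mount. A simplified Python sketch of that rule (the real <code>systemd-escape --path</code> tool handles additional edge cases such as empty paths and non-UTF-8 bytes):

```python
def systemd_escape_path(path: str) -> str:
    """Escape a filesystem path into a systemd unit-name prefix.

    Simplified version of `systemd-escape --path`: strip surrounding
    slashes, map '/' to '-', and hex-escape characters outside the
    set allowed in unit names ([A-Za-z0-9:_.], with a leading '.'
    also escaped).
    """
    p = path.strip("/")
    out = []
    for i, ch in enumerate(p):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# The recursor bind-mount target maps to the escaped unit name:
print(systemd_escape_path("/var/chroot/run/pdns-recursor") + ".mount")
# -> var-chroot-run-pdns\x2drecursor.mount
```

This is why the unit file for /var/chroot/run/pdns-recursor must carry the \x2d escape in its filename, while the other mount points escape cleanly to plain dashes.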
We also need a service that creates the /var/chroot/run/systemd/notify file, so that the notify socket from systemd can be bind-mounted onto it.
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns\x2drecursor.mount
[Unit]
Description=Mount /run/pdns-recursor to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns-recursor
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/pdns-recursor
Where=/var/chroot/run/pdns-recursor
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
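The backslash-escaped file name above is not arbitrary: a mount unit's name has to match its Where= path with special characters escaped, so the dash in pdns-recursor becomes \x2d. If in doubt, systemd-escape generates the correct name:
<syntaxhighlight lang=bash>
$ systemd-escape -p --suffix=mount /var/chroot/run/pdns-recursor
var-chroot-run-pdns\x2drecursor.mount
</syntaxhighlight>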
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
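With all unit files in place they still have to be enabled; a minimal sketch (unit names as defined above, the escaped name quoted so the shell keeps the backslash):
<syntaxhighlight lang=bash>
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now var-chroot-run.mount var-chroot-tmp.mount \
    var-chroot-create-dirs.service var-chroot-run-pdns.mount \
    'var-chroot-run-pdns\x2drecursor.mount' \
    var-chroot-run-systemd-notify.mount var-chroot-usr-share-dns.mount
</syntaxhighlight>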
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
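After creating the override.conf files (or editing them via systemctl edit), systemd has to reload its configuration before the changes take effect; roughly:
<syntaxhighlight lang=bash>
$ sudo systemctl daemon-reload
$ sudo systemctl restart pdns.service pdns-recursor.service
$ systemctl show pdns-recursor.service --property=AmbientCapabilities
</syntaxhighlight>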
1a491f5f112bb7348be1420ecbe4ec8ada7b471f
Initramfs
0
396
2661
2022-10-06T08:35:15Z
Lollypop
2
Created page with "=Adding binaries to the initial ram disk in ubuntu= == Adding vi to initramfs == <SyntaxHighlight lang=bash> $ sudo cat >/etc/initramfs-tools/hooks/vi <<EOH #!/bin/sh PREREQ="" prereqs() { echo "$PREREQ" } case $1 in prereqs) prereqs exit 0 ;; esac . /usr/share/initramfs-tools/hook-functions # Begin real processing below this line copy_exec /usr/bin/vim.basic /bin exit 0 EOH $ sudo update-initramfs -k $(uname -r) -u </SyntaxHighlight>"
wikitext
text/x-wiki
=Adding binaries to the initial ram disk in Ubuntu=
== Adding vi to initramfs ==
<SyntaxHighlight lang=bash>
$ sudo cat >/etc/initramfs-tools/hooks/vi <<EOH
#!/bin/sh
PREREQ=""
prereqs()
{
echo "$PREREQ"
}
case $1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
# Begin real processing below this line
copy_exec /usr/bin/vim.basic /bin
exit 0
EOH
$ sudo update-initramfs -k $(uname -r) -u
</SyntaxHighlight>
6286389cfb5bf055a3ed111f536e98594bbaea13
2662
2661
2022-10-06T08:42:59Z
Lollypop
2
/* Adding vi to initramfs */
wikitext
text/x-wiki
=Adding binaries to the initial ram disk in Ubuntu=
== Adding vi to initramfs ==
<SyntaxHighlight lang=bash>
$ sudo apt --yes install vim
$ sudo cat >/etc/initramfs-tools/hooks/vi <<EOH
#!/bin/sh -e
PREREQ=""
prereqs()
{
echo "\$PREREQ"
}
case \$1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
# Begin real processing below this line
copy_exec /usr/bin/vim.basic /bin
exit 0
EOH
$ sudo chmod 0755 /etc/initramfs-tools/hooks/vi
$ sudo update-initramfs -k $(uname -r) -u -v
...
Calling hook vi
Adding binary /usr/bin/vim.basic
Adding binary-link /usr/lib/x86_64-linux-gnu/libtinfo.so.6
Adding binary /usr/lib/x86_64-linux-gnu/libtinfo.so.6.3
Adding binary-link /usr/lib/x86_64-linux-gnu/libsodium.so.23
Adding binary /usr/lib/x86_64-linux-gnu/libsodium.so.23.3.0
Adding binary /lib/x86_64-linux-gnu/libgpm.so.2
Adding binary /lib/x86_64-linux-gnu/libpython3.10.so.1.0
Adding binary-link /usr/lib/x86_64-linux-gnu/libexpat.so.1
Adding binary /usr/lib/x86_64-linux-gnu/libexpat.so.1.8.7
...
</SyntaxHighlight>
As you can see, all libraries needed by the binary are added automatically.
3a21ac77237be9cd750b496bcb8b0e50d54fad46
2663
2662
2022-10-06T08:43:47Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
=Adding binaries to the initial ram disk in Ubuntu=
== Adding vi to initramfs ==
<SyntaxHighlight lang=bash>
$ sudo apt --yes install vim
$ sudo cat >/etc/initramfs-tools/hooks/vi <<EOH
#!/bin/sh -e
PREREQ=""
prereqs()
{
echo "\$PREREQ"
}
case \$1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
# Begin real processing below this line
copy_exec /usr/bin/vim.basic /bin
exit 0
EOH
$ sudo chmod 0755 /etc/initramfs-tools/hooks/vi
$ sudo update-initramfs -k $(uname -r) -u -v
...
Calling hook vi
Adding binary /usr/bin/vim.basic
Adding binary-link /usr/lib/x86_64-linux-gnu/libtinfo.so.6
Adding binary /usr/lib/x86_64-linux-gnu/libtinfo.so.6.3
Adding binary-link /usr/lib/x86_64-linux-gnu/libsodium.so.23
Adding binary /usr/lib/x86_64-linux-gnu/libsodium.so.23.3.0
Adding binary /lib/x86_64-linux-gnu/libgpm.so.2
Adding binary /lib/x86_64-linux-gnu/libpython3.10.so.1.0
Adding binary-link /usr/lib/x86_64-linux-gnu/libexpat.so.1
Adding binary /usr/lib/x86_64-linux-gnu/libexpat.so.1.8.7
...
</SyntaxHighlight>
As you can see, all libraries needed by the binary are added automatically.
ad88f628b746bd6cc6b37745031712146393e191
2664
2663
2022-10-06T08:45:19Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Bootprocess]]
=Adding binaries to the initial ram disk in Ubuntu=
== Adding vi to initramfs ==
<SyntaxHighlight lang=bash>
$ sudo apt --yes install vim
$ sudo cat >/etc/initramfs-tools/hooks/vi <<EOH
#!/bin/sh -e
PREREQ=""
prereqs()
{
echo "\$PREREQ"
}
case \$1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
# Begin real processing below this line
copy_exec /usr/bin/vim.basic /bin
exit 0
EOH
$ sudo chmod 0755 /etc/initramfs-tools/hooks/vi
$ sudo update-initramfs -k $(uname -r) -u -v
...
Calling hook vi
Adding binary /usr/bin/vim.basic
Adding binary-link /usr/lib/x86_64-linux-gnu/libtinfo.so.6
Adding binary /usr/lib/x86_64-linux-gnu/libtinfo.so.6.3
Adding binary-link /usr/lib/x86_64-linux-gnu/libsodium.so.23
Adding binary /usr/lib/x86_64-linux-gnu/libsodium.so.23.3.0
Adding binary /lib/x86_64-linux-gnu/libgpm.so.2
Adding binary /lib/x86_64-linux-gnu/libpython3.10.so.1.0
Adding binary-link /usr/lib/x86_64-linux-gnu/libexpat.so.1
Adding binary /usr/lib/x86_64-linux-gnu/libexpat.so.1.8.7
...
</SyntaxHighlight>
As you can see, all libraries needed by the binary are added automatically.
62e5757eafb261bb2842c6e12242c9aa8e423729
2666
2664
2022-10-06T08:47:31Z
Lollypop
2
/* Adding binaries to the initial ram disk in ubuntu */
wikitext
text/x-wiki
[[Category:Bootprocess]]
=Adding binaries to the initial ram disk in Ubuntu=
See the initramfs-tools(7) manual page for this, too.
== Adding vi to initramfs ==
<SyntaxHighlight lang=bash>
$ sudo apt --yes install vim
$ sudo tee /etc/initramfs-tools/hooks/vi >/dev/null <<EOH
#!/bin/sh -e
PREREQ=""
prereqs()
{
echo "\$PREREQ"
}
case \$1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
# Begin real processing below this line
copy_exec /usr/bin/vim.basic /bin
exit 0
EOH
$ sudo chmod 0755 /etc/initramfs-tools/hooks/vi
$ sudo update-initramfs -k $(uname -r) -u -v
...
Calling hook vi
Adding binary /usr/bin/vim.basic
Adding binary-link /usr/lib/x86_64-linux-gnu/libtinfo.so.6
Adding binary /usr/lib/x86_64-linux-gnu/libtinfo.so.6.3
Adding binary-link /usr/lib/x86_64-linux-gnu/libsodium.so.23
Adding binary /usr/lib/x86_64-linux-gnu/libsodium.so.23.3.0
Adding binary /lib/x86_64-linux-gnu/libgpm.so.2
Adding binary /lib/x86_64-linux-gnu/libpython3.10.so.1.0
Adding binary-link /usr/lib/x86_64-linux-gnu/libexpat.so.1
Adding binary /usr/lib/x86_64-linux-gnu/libexpat.so.1.8.7
...
</SyntaxHighlight>
As you can see, all libraries needed by the binary are added automatically.
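Note the backslashes in \$PREREQ and \$1 inside the heredoc: with an unquoted delimiter (<<EOH) the shell expands $-variables while writing the file, so they must be escaped to land verbatim in the hook script. A quick illustration:
<syntaxhighlight lang=bash>
$ PREREQ=""
$ cat <<EOH
expanded: [$PREREQ]
literal: [\$PREREQ]
EOH
expanded: []
literal: [$PREREQ]
</syntaxhighlight>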
99a5ec74380481b8ff1fc1f131e6d18b712fa529
Category:Bootprocess
14
397
2665
2022-10-06T08:45:46Z
Lollypop
2
Created page with "[[Category:KnowHow]]"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Linux kernel
0
398
2667
2022-10-06T09:33:28Z
Lollypop
2
Created page with "[[Category:Bootprocess]] ==Break boot process before everything really starts== Sometimes, maybe after cloning a VM, you want to edit network settings and so on before your VM comes up with an IP address of the machine it is cloned from. In this moments a stop right after filesystems are mounted is a nice option. To do this you can put an additional option to the kernel options in grub during boot. When grub shows up, navigate to your disired kernel and press <i>e</i>..."
wikitext
text/x-wiki
[[Category:Bootprocess]]
==Break boot process before everything really starts==
Sometimes, for example after cloning a VM, you want to edit the network settings before the VM comes up with the IP address of the machine it was cloned from. In these moments a stop right after the filesystems are mounted is a nice option. To do this you can add an additional option to the kernel options in GRUB during boot.
When the GRUB menu shows up, navigate to your desired kernel and press <i>e</i> (edit).
Find the line where the kernel is loaded (I always use ZFS so it looks like this at my installations):
<pre>
linux "/BOOT/ubuntu_ocpr44@/vmlinuz-5.15.0-48-generic" root=ZFS="rpool/ROOT/ubuntu_ocpr44" ro init_on_alloc=0
</pre>
Add the option break=bottom:
<pre>
linux "/BOOT/ubuntu_ocpr44@/vmlinuz-5.15.0-48-generic" root=ZFS="rpool/ROOT/ubuntu_ocpr44" ro init_on_alloc=0 break=bottom
</pre>
Press <Ctrl>+<X> to boot with this setting.
After the initial boot stage you will be dropped into the initramfs shell:
<pre>
(initramfs) _
</pre>
At this point your root filesystem is mounted at /root, so to edit network settings you need to modify the files under /root/etc/netplan.<br>
For example:
<pre>
(initramfs) vi /root/etc/netplan/ens160.yaml
</pre>
To always have your favorite editor available in your initramfs, take a look at [[initramfs#Adding vi to initramfs]].
f481b520d94138417a8323ffd3ba0c8ae5f4b0de
2668
2667
2022-10-06T09:34:01Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Bootprocess|Kernel]]
==Break boot process before everything really starts==
Sometimes, for example after cloning a VM, you want to edit the network settings before the VM comes up with the IP address of the machine it was cloned from. In these moments a stop right after the filesystems are mounted is a nice option. To do this you can add an additional option to the kernel options in GRUB during boot.
When the GRUB menu shows up, navigate to your desired kernel and press <i>e</i> (edit).
Find the line where the kernel is loaded (I always use ZFS so it looks like this at my installations):
<pre>
linux "/BOOT/ubuntu_ocpr44@/vmlinuz-5.15.0-48-generic" root=ZFS="rpool/ROOT/ubuntu_ocpr44" ro init_on_alloc=0
</pre>
Add the option break=bottom:
<pre>
linux "/BOOT/ubuntu_ocpr44@/vmlinuz-5.15.0-48-generic" root=ZFS="rpool/ROOT/ubuntu_ocpr44" ro init_on_alloc=0 break=bottom
</pre>
Press <Ctrl>+<X> to boot with this setting.
After the initial boot stage you will be dropped into the initramfs shell:
<pre>
(initramfs) _
</pre>
At this point your root filesystem is mounted at /root, so to edit network settings you need to modify the files under /root/etc/netplan.<br>
For example:
<pre>
(initramfs) vi /root/etc/netplan/ens160.yaml
</pre>
To always have your favorite editor available in your initramfs, take a look at [[initramfs#Adding vi to initramfs]].
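Once the edits are done, the boot can simply be resumed: leaving the break shell lets the normal init sequence continue with the corrected configuration:
<syntaxhighlight lang=bash>
(initramfs) exit
</syntaxhighlight>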
416920ab9a0dd7268f5bf9cf44c7ee2f93575bdf
ESPEasy
0
371
2669
2544
2022-10-09T12:47:49Z
Lollypop
2
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
==Problems==
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
2b90070e6d96779ea437b80b9be05aacaecebba8
2670
2669
2022-10-09T12:48:36Z
Lollypop
2
wikitext
text/x-wiki
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
* [https://www.az-delivery.de/products/copy-of-nodemcu-lua-amica-v2-modul-mit-esp8266-12e NodeMCU Lua Lolin V3 Module ESP8266 ESP-12F WIFI Wifi Development Board mit CH340]
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
6873229d3af72a3cbad7bef04b4d10bab4bc7a0f
2671
2670
2022-10-10T08:39:03Z
Lollypop
2
wikitext
text/x-wiki
==Flash the firmware==
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
* Save
====Reboot====
* Reboot
====Delete the whole configuration====
Be careful! Are you sure you want this?
This deletes the whole configuration, but not the firmware.
* Reset
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
670086869feba9dd192e35e1fad6754576bafe44
2672
2671
2022-10-10T08:42:00Z
Lollypop
2
/* Posssible commands via serial connection */
wikitext
text/x-wiki
==Flash the firmware==
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then you will see the command you entered and its result, like this:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
* Save
====Reboot====
* Reboot
====Delete the whole configuration====
Be careful! Are you sure you want this?
This deletes the whole configuration, but not the firmware.
* Reset
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
61358ebdf5b993c97ed711907462ca98c963e70f
2673
2672
2022-10-10T08:56:46Z
Lollypop
2
/* Posssible commands via serial connection */
wikitext
text/x-wiki
==Flash the firmware==
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then you will see the command you entered and its result, like this:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Set the password of the unit ====
* Password <mypassword>
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
* Save
====Reboot====
* Reboot
====Delete the whole configuration====
Be careful! Are you sure you want this?
This deletes the whole configuration, but not the firmware.
* Reset
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
7b5d3cb2a8b2cbfe5574b5945c4dc8ede21406ce
2674
2673
2022-10-10T09:19:49Z
Lollypop
2
/* Posssible commands via serial connection */
wikitext
text/x-wiki
==Flash the firmware==
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then you will see the command you entered and its result, like this:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Are you sure you want this?
This deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== TimeZone ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
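If braille display support is ever needed again, the workaround can be reverted; roughly (this removes the /dev/null symlinks created above and unmasks the service):
<syntaxhighlight lang=bash>
$ sudo systemctl unmask brltty-udev.service
$ sudo rm /etc/udev/rules.d/*brltty*.rules
$ sudo udevadm control --reload-rules
</syntaxhighlight>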
60116120da23cbd6636bc59354cd2d4e94b3e7d9
2675
2674
2022-10-10T09:21:10Z
Lollypop
2
Lollypop moved page [[NodeMCU]] to [[ESPEasy]]: NodeMCU is just one possible Unit to run ESPEasy
wikitext
text/x-wiki
==Flash the firmware==
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You will not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful, do you really want this? It deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== Get or set the time zone offset ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
60116120da23cbd6636bc59354cd2d4e94b3e7d9
2677
2675
2022-10-10T09:24:51Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You will not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful, do you really want this? It deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== Get or set the time zone offset ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
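The Wi-Fi setup commands can also be sent from a script instead of typing them interactively. A minimal sketch, assuming the unit is attached as /dev/ttyUSB0; send_wifi is my own helper name, and 'myssid'/'mypassword' are placeholders as in the list above:
<syntaxhighlight lang=bash>
# send_wifi writes each setup command as a line to the given serial device.
send_wifi() {   # usage: send_wifi /dev/ttyUSB0
  for cmd in 'WifiSSID myssid' 'WifiKey mypassword' 'WifiConnect' 'Save'; do
    printf '%s\n' "$cmd" >> "$1"
    sleep 1   # give ESPEasy a moment to process each command
  done
}
[ -e /dev/ttyUSB0 ] && send_wifi /dev/ttyUSB0 || true
</syntaxhighlight>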
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
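The loop above masks the packaged brltty rules by shadowing each file with a symlink to /dev/null in /etc/udev/rules.d, which takes precedence over /usr/lib/udev/rules.d. A small demonstration of the shadowing pattern, using temporary directories instead of the real udev paths:
<syntaxhighlight lang=bash>
# Simulate masking: for every rule file in the "vendor" dir,
# create a /dev/null symlink of the same name in the "admin" dir.
vendor=$(mktemp -d)
admin=$(mktemp -d)
touch "$vendor/85-brltty.rules" "$vendor/90-brltty-uinput.rules"
for f in "$vendor"/*brltty*.rules; do
  ln -s /dev/null "$admin/$(basename "$f")"
done
ls -l "$admin"
</syntaxhighlight>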
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
731ab48483213fe348cebd8cee25b652c2061740
NodeMCU
0
399
2676
2022-10-10T09:21:10Z
Lollypop
2
Lollypop moved page [[NodeMCU]] to [[ESPEasy]]: NodeMCU is just one possible Unit to run ESPEasy
wikitext
text/x-wiki
#REDIRECT [[ESPEasy]]
b7dbc5b9494bf2115218d166e128cb50be56d413
Intergator
0
400
2680
2022-10-19T09:28:56Z
Lollypop
2
Created page with "<SyntaxHighlight lang=bash> manage-cluster.py --target-cluster uws-test --sync-workspace manage-cluster.py --target-cluster uws-test --config-all manage-cluster.py --target-cluster uws-test --ig-config-update manage-cluster.py --target-cluster uws-test --action ig-stop manage-cluster.py --target-cluster uws-test --action ig-start manage-cluster.py --target-cluster uws-test --action ig-restart manage-cluster.py --target-cluster uws-test --action ig-status manage-cluste..."
wikitext
text/x-wiki
<SyntaxHighlight lang=bash>
manage-cluster.py --target-cluster uws-test --sync-workspace
manage-cluster.py --target-cluster uws-test --config-all
manage-cluster.py --target-cluster uws-test --ig-config-update
manage-cluster.py --target-cluster uws-test --action ig-stop
manage-cluster.py --target-cluster uws-test --action ig-start
manage-cluster.py --target-cluster uws-test --action ig-restart
manage-cluster.py --target-cluster uws-test --action ig-status
manage-cluster.py --target-cluster uws-test --action proxy-stop
manage-cluster.py --target-cluster uws-test --action proxy-start
manage-cluster.py --target-cluster uws-test --action proxy-restart
manage-cluster.py --target-cluster uws-test --action proxy-status
manage-cluster.py --target-cluster uws-test --action db-stop
manage-cluster.py --target-cluster uws-test --action db-start
manage-cluster.py --target-cluster uws-test --action db-restart
manage-cluster.py --target-cluster uws-test --action db-status
manage-cluster.py --target-cluster uws-test --action mon-stop
manage-cluster.py --target-cluster uws-test --action mon-start
manage-cluster.py --target-cluster uws-test --action mon-restart
manage-cluster.py --target-cluster uws-test --action mon-status
</SyntaxHighlight>
7208ce2988a33a1006a52ee22971e6b4efac3c67
2681
2680
2022-10-19T09:30:05Z
Lollypop
2
wikitext
text/x-wiki
<SyntaxHighlight lang=bash>
manage-cluster.py --target-cluster uws-test --show-defaults
manage-cluster.py --target-cluster uws-test --sync-workspace
manage-cluster.py --target-cluster uws-test --config-all
manage-cluster.py --target-cluster uws-test --ig-config-update
manage-cluster.py --target-cluster uws-test --action ig-stop
manage-cluster.py --target-cluster uws-test --action ig-start
manage-cluster.py --target-cluster uws-test --action ig-restart
manage-cluster.py --target-cluster uws-test --action ig-status
manage-cluster.py --target-cluster uws-test --action proxy-stop
manage-cluster.py --target-cluster uws-test --action proxy-start
manage-cluster.py --target-cluster uws-test --action proxy-restart
manage-cluster.py --target-cluster uws-test --action proxy-status
manage-cluster.py --target-cluster uws-test --action db-stop
manage-cluster.py --target-cluster uws-test --action db-start
manage-cluster.py --target-cluster uws-test --action db-restart
manage-cluster.py --target-cluster uws-test --action db-status
manage-cluster.py --target-cluster uws-test --action mon-stop
manage-cluster.py --target-cluster uws-test --action mon-start
manage-cluster.py --target-cluster uws-test --action mon-restart
manage-cluster.py --target-cluster uws-test --action mon-status
</SyntaxHighlight>
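Since every component (ig, proxy, db, mon) supports the same stop/start/restart/status actions, one action can be fanned out over all of them with a small wrapper. This is just a sketch of my own (all_action is a hypothetical helper, not part of manage-cluster.py); the echo makes it a dry run, drop it to actually execute:
<syntaxhighlight lang=bash>
# Print the manage-cluster.py call for one action on every component.
all_action() {   # usage: all_action <cluster> <action>
  for comp in ig proxy db mon; do
    echo manage-cluster.py --target-cluster "$1" --action "${comp}-$2"
  done
}
all_action uws-test status
</syntaxhighlight>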
605dfadfb91e2ffad99a8d2db64c52c245c26aa3
SuSE Manager
0
348
2682
2574
2022-11-16T08:21:06Z
Lollypop
2
/* Generate CSR */
wikitext
text/x-wiki
[[category :Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful spacecmd commands:
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that newer versions of zypper call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository that still contains the working version, e.g. the ISO this system was installed from:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall the curl packages in the old version that still accepts the cipher list DEFAULT_SUSE:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: After some further debugging we found that the system library path forced a wrong OpenSSL library into place.=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path; there, another libssl.so.1.0.0 (version 1.0.2h) was picked up first. OK. What to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create workspace ===
<syntaxhighlight lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn} ${emailAddress}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -noout -verify -subject -in server.csr
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
</syntaxhighlight>
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -i $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</syntaxhighlight>
177b476b8e8b86daaa09272fce7dafffadb19689
2683
2682
2022-11-16T09:06:18Z
Lollypop
2
/* Generate CSR */
wikitext
text/x-wiki
[[category :Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
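The -x option takes a sed-style substitution; replacing the end-of-name anchor ($) is what appends the timestamp. The expression in isolation, with a fixed timestamp for clarity:
<syntaxhighlight lang=bash>
# The regex replaces the end-of-line anchor ($) with a timestamp suffix.
stamp='2017-11-22_14:26:42'   # in the command above: $(date '+%Y-%m-%d_%H:%M:%S')
echo 'sles12-sp3-pool-x86_64' | sed "s/\$/-${stamp}/"
</syntaxhighlight>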
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful spacecmd commands:
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that newer versions of zypper call curl with a specific cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now add any repository that still contains the working version, e.g. the ISO this system was installed from:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall the curl packages in the old version that still accepts the cipher list DEFAULT_SUSE:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: After some further debugging we found that the system library path forced a wrong OpenSSL library into place.=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
In our version of curl it should be OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path; there, another libssl.so.1.0.0 (version 1.0.2h) was picked up first. OK. What to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create workspace ===
<syntaxhighlight lang=bash>
# mkdir ~/ssl-build
# mkdir ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
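A shell pitfall relevant to the --dir option: a tilde inside double quotes is not expanded, so a quoted "~/ssl-build" is passed to the program literally, while $HOME expands even when quoted:
<syntaxhighlight lang=bash>
printf '%s\n' "~/ssl-build"       # literal: ~/ssl-build
printf '%s\n' "$HOME/ssl-build"   # the expanded home directory path
</syntaxhighlight>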
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
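The ${hosts[n]:+...} expansions in the command above cap the SAN list at five names; a loop builds the same subjectAltName string for any number of hosts:
<syntaxhighlight lang=bash>
hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
san="DNS:${hosts[0]}"
# Append every remaining host; ${hosts[@]:1} is the array without its first element.
for h in "${hosts[@]:1}"; do
  san="${san},DNS:${h}"
done
echo "$san"   # DNS:susemgr.tld.de,DNS:othername.tld.de,DNS:anotheranothername.tld.de
</syntaxhighlight>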
<syntaxhighlight lang=bash>
# openssl req -noout -in server.csr -text 2>/dev/null | grep -E "(CN|DNS:)"
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
DNS:susemgr.tld.de, DNS:othername.tld.de, DNS:anotheranothername.tld.de
</syntaxhighlight>
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
Remove the previous version:
<syntaxhighlight lang=bash>
# rpm --query --all | grep -E "rhn-org-httpd-ssl-key-pair-.*\.noarch" | xargs -i rpm --erase "{}"
</syntaxhighlight>
Install the latest version:
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -i $(grep -E "rhn-org-httpd-ssl-key-pair-.*\.noarch\.rpm" latest.txt)
</syntaxhighlight>
502c6b85edfa71165c45260964ada7c1441617e8
2686
2685
2022-11-16T09:26:12Z
Lollypop
2
/* Install certificate and key in the apache directories */
wikitext
text/x-wiki
[[category :Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channle list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<syntaxhighlight channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
will result in a new channel pool named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to lookup the available [[#List available activation keys|activation keys]]
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful space commands
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that zypper in newer versions calls curl with a specific cipher list named "DEFAULT_SUSE" which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is OK).
Now get any kind of repository bound to your SuSE like the ISO this version was installed with:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall zypper in the old version that does not call curl with the cipher list SUSE_DEFAULT:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all *curl* --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: After further debugging we found that the system library path pulled in the wrong OpenSSL library.=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
Our build of curl should report OpenSSL/1.0.2j, not 1.0.2h:
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path. That directory contained another libssl.so.1.0.0, built from version 1.0.2h. OK, what to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
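Deleting the config file outright works, but is hard to undo. A gentler variant comments out only the offending path; the sketch below runs on a scratch file so it is safe to try (the real target would be the problematic file under /etc/ld.so.conf.d/):

```bash
# Sketch: neutralize the offending loader path instead of deleting the file.
# The real target would be the file under /etc/ld.so.conf.d/; a scratch file
# stands in here so the example is safe to run.
conf=$(mktemp)
printf '%s\n' '/usr/lib/nsr/lib64' > "$conf"
sed -i 's|^/usr/lib/nsr|# &|' "$conf"   # comment the line out, keep the file
cat "$conf"
rm -f "$conf"
```

Remember to run ldconfig afterwards so /etc/ld.so.cache is rebuilt.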
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker still use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
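The map_files entries are only readable with broad permissions; /proc/PID/maps answers the same question. A sketch (the current shell stands in for nsrexecd so it runs anywhere):

```bash
# Which shared objects does a process have mapped? /proc/PID/maps lists them.
# For the NetWorker daemon you would use: pid=$(pgrep --full /usr/sbin/nsrexecd)
pid=$$   # current shell as a stand-in
grep -Eo '/[^ ]*lib(ssl|crypto|c)[^ ]*' "/proc/$pid/maps" | sort -u
```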
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create workspace ===
<syntaxhighlight lang=bash>
# mkdir -p ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
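The ${hosts[1]:+,DNS:...} chain in the request below stops at five names; a small loop builds the same subjectAltName value for any number of entries (reusing the hosts array declared above):

```bash
# Build "DNS:a,DNS:b,..." from the hosts array, however many entries it has
hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
san="DNS:${hosts[0]}"
for h in "${hosts[@]:1}"; do
    san+=",DNS:${h}"
done
echo "$san"
# → DNS:susemgr.tld.de,DNS:othername.tld.de,DNS:anotheranothername.tld.de
```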
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn} ${emailAddress}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -noout -in server.csr -text 2>/dev/null | grep -E "(CN|DNS:)"
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration suselinux-admin@tld.de, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
DNS:susemgr.tld.de, DNS:othername.tld.de , DNS:anotheranothername.tld.de
</syntaxhighlight>
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
Remove the previous version:
<syntaxhighlight lang=bash>
# rpm --query --all | grep -E "rhn-org-httpd-ssl-key-pair-.*\.noarch" | xargs -I '{}' rpm --erase '{}'
</syntaxhighlight>
Install latest version:
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -i $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</syntaxhighlight>
Check:
<syntaxhighlight lang=bash>
# openssl x509 -noout -in /etc/apache2/ssl.crt/server.crt -dates
notBefore=Nov 16 08:35:35 2022 GMT
notAfter=Nov 16 08:35:35 2023 GMT
</syntaxhighlight>
I don't know the official SuSE way to do this, but this works:
<syntaxhighlight lang=bash>
# cp -p /etc/apache2/ssl.crt/server.crt /etc/pki/tls/certs/spacewalk.crt
# cp -p /etc/apache2/ssl.key/server.key /etc/pki/tls/private/spacewalk.key
</syntaxhighlight>
<syntaxhighlight lang=bash>
# spacewalk-service restart
# echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Nov 16 08:35:35 2022 GMT
notAfter=Nov 16 08:35:35 2023 GMT
</syntaxhighlight>
0cc005779085455d27c3b6ac85ca3f25e40ab871
2696
2688
2022-11-23T08:50:17Z
Lollypop
2
/* Install certificate and key in the apache directories */
wikitext
text/x-wiki
[[category :Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
This results in a new channel tree named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42.
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]:
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful spacecmd commands:
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that newer versions of zypper call curl with a cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is fine).
Now add a repository that still carries the old packages, for example the ISO this system was installed from:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall the curl packages in the old version, which still understands the cipher list DEFAULT_SUSE:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all '*curl*' --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: Further debugging showed that the system library path pulled in the wrong OpenSSL library=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
Given the installed openssl package, curl should report OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path; there another libssl.so.1.0.0, in version 1.0.2h, was found first. OK, what to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create work area ===
<syntaxhighlight lang=bash>
# mkdir --parents ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn} ${emailAddress}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -noout -in server.csr -text 2>/dev/null | grep -E "(CN|DNS:)"
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration suselinux-admin@tld.de, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
DNS:susemgr.tld.de, DNS:othername.tld.de , DNS:anotheranothername.tld.de
</syntaxhighlight>
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
Install the latest version (rpm -Uvh replaces the previously installed one):
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -Uvh $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</syntaxhighlight>
Check:
<syntaxhighlight lang=bash>
# openssl x509 -noout -in /etc/apache2/ssl.crt/server.crt -dates
notBefore=Nov 16 08:35:35 2022 GMT
notAfter=Nov 16 08:35:35 2023 GMT
</syntaxhighlight>
I don't know the official SuSE way to do this, but this works:
<syntaxhighlight lang=bash>
# cp -p /etc/apache2/ssl.crt/server.crt /etc/pki/tls/certs/spacewalk.crt
# cp -p /etc/apache2/ssl.key/server.key /etc/pki/tls/private/spacewalk.key
# cp -p /etc/apache2/ssl.key/server.key /etc/pki/tls/private/pg-spacewalk.key
# chmod 0640 /etc/pki/tls/private/spacewalk.key
# chgrp postgres /etc/pki/tls/private/spacewalk.key
</syntaxhighlight>
<syntaxhighlight lang=bash>
# spacewalk-service restart
# echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Nov 16 08:35:35 2022 GMT
notAfter=Nov 16 08:35:35 2023 GMT
</syntaxhighlight>
a6e41a7c176f0f4766b6a486e0ac3422796ef6e7
2697
2696
2022-11-23T08:50:36Z
Lollypop
2
/* Install certificate and key in the apache directories */
wikitext
text/x-wiki
[[category :Linux]]
[[category:SuSE]]
=SuSE Manager=
==Channels==
===Refresh channel list===
<syntaxhighlight lang=bash>
# mgr-sync refresh
</syntaxhighlight>
===List available channels===
<syntaxhighlight lang=bash>
# mgr-sync list channels
</syntaxhighlight>
===Add Channel===
<syntaxhighlight lang=bash>
# mgr-sync add channel <channel>
</syntaxhighlight>
===Delete Channel===
<syntaxhighlight lang=bash>
# spacewalk-remove-channel -c <channel>
</syntaxhighlight>
===Create a frozen channel===
Clone a channel (which is like a snapshot) and add a timestamp at the end of the name:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s '<source channel or pool>' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
e.g.:
<syntaxhighlight lang=bash>
# spacecmd softwarechannel_clonetree -s 'sles12-sp3-pool-x86_64' -x "s/\$/-$(date '+%Y-%m-%d_%H:%M:%S')/"
</syntaxhighlight>
This results in a new channel tree named e.g. sles12-sp3-pool-x86_64-2017-11-22_14:26:42.
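The -x argument is an ordinary sed substitution. You can preview the resulting label with plain shell tools before cloning anything; this is just an illustration of the expression, not a spacecmd call:

```shell
# 's/$/-TIMESTAMP/' anchors at end-of-line, so the timestamp is
# appended to the channel label.
label='sles12-sp3-pool-x86_64'
stamp=$(date '+%Y-%m-%d_%H:%M:%S')
new_label=$(printf '%s\n' "$label" | sed "s/\$/-${stamp}/")
echo "$new_label"
```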
===Compose your own channel===
<syntaxhighlight lang=bash>
# spacecmd
spacecmd {SSM:0}> softwarechannel_create -n OpenSuSE -l opensuse -a x86_64 -c sha256
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp2-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP2/
spacecmd {SSM:0}> repo_create -n opensuse-database-sles12-sp3-x86_64 -u https://download.opensuse.org/repositories/server:/database/SLE_12_SP3/
spacecmd {SSM:0}> repo_list
opensuse-database-sles12-sp2-x86_64
opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp2-x86_64
spacecmd {SSM:0}> softwarechannel_addrepo opensuse opensuse-database-sles12-sp3-x86_64
spacecmd {SSM:0}> quit
# spacewalk-repo-sync -c opensuse
</syntaxhighlight>
==Bootstrap==
===Create bootstrap repo===
Do it for each channel!
<syntaxhighlight lang=bash>
# mgr-create-bootstrap-repo
</syntaxhighlight>
===Create bootstrap shell scripts in /srv/www/htdocs/pub/bootstrap===
Do not forget to look up the available [[#List available activation keys|activation keys]]:
<syntaxhighlight lang=bash>
# spacecmd -s susemanager.server.de -u mytestuser -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-default
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
6-sles12-sp4-x86_64
6-sles12-sp5-x86_64
6-sles15-sp0-x86_64
6-sles15-sp1-x86_64
6-sles15-sp2-x86_64
# mgr-bootstrap --traditional --script=My-New-SLES11-SP4.sh --activation-keys=6-sles11-sp4-x86_64
</syntaxhighlight>
==Activation keys==
===List available activation keys===
web: Systems -> Activation Keys
<syntaxhighlight lang=bash>
# spacecmd -q activationkey_list
6-sles11-sp3-x86_64
6-sles11-sp4-x86_64
6-sles12-sp0-x86_64
6-sles12-sp1-x86_64
6-sles12-sp2-x86_64
6-sles12-sp3-x86_64
</syntaxhighlight>
==spacecmd==
Just some useful spacecmd commands:
<syntaxhighlight lang=bash>
# spacecmd system_list
</syntaxhighlight>
==rhn-search==
===Cleanup the search index===
<syntaxhighlight lang=bash>
# rhn-search cleanindex
</syntaxhighlight>
==Troubleshooting==
===Clients===
====Error code: Curl error 59 / Error message: failed setting cipher list: DEFAULT_SUSE====
<syntaxhighlight lang=bash>
# zypper refresh
...
Error code: Curl error 59
Error message: failed setting cipher list: DEFAULT_SUSE
...
</syntaxhighlight>
The reason is that newer versions of zypper call curl with a cipher list named "DEFAULT_SUSE", which is not defined in curl version 7.37.0-37.17.1 (version 7.37.0-28.1 is fine).
Now add a repository that still carries the old packages, for example the ISO this system was installed from:
<syntaxhighlight lang=bash>
# zypper addrepo --check --type yast2 'iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso' 'SLES12-SP2-12.2-0'
Adding repository 'SLES12-SP2-12.2-0' ...........................................................................................................[done]
Repository 'SLES12-SP2-12.2-0' successfully added
Enabled : Yes
Autorefresh : No
GPG Check : Yes
Priority : 99
URI : iso:///?iso=/install/OS/suse/iso/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso
</syntaxhighlight>
or enable it:
<syntaxhighlight lang=bash>
# zypper modifyrepo --enable SLES12-SP2-12.2-0
</syntaxhighlight>
Reinstall the curl packages in the old version, which still understands the cipher list DEFAULT_SUSE:
<syntaxhighlight lang=bash>
# zypper install --force --repo SLES12-SP2-12.2-0 $(rpm --query --all '*curl*' --queryformat '%{NAME} ')
</syntaxhighlight>
And disable the ISO repository:
<syntaxhighlight lang=bash>
# zypper modifyrepo --disable SLES12-SP2-12.2-0
</syntaxhighlight>
Done.
=====Note: Further debugging showed that the system library path pulled in the wrong OpenSSL library=====
<syntaxhighlight lang=bash>
# curl --version ; zypper --version
curl 7.37.0 (x86_64-suse-linux-gnu) libcurl/7.37.0 OpenSSL/1.0.2h zlib/1.2.8 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
zypper 1.13.40
</syntaxhighlight>
Given the installed openssl package, curl should report OpenSSL/1.0.2j.
<syntaxhighlight lang="bash" highlight="5">
# rpm -qv openssl
openssl-1.0.2j-60.24.1.x86_64
# openssl version
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
OpenSSL 1.0.2j-fips 26 Sep 2016 (Library: OpenSSL 1.0.2h-fips 3 May 2016)
</syntaxhighlight>
Ha!
OK... then, after looking at the system library path, we got a clue ;-):
<syntaxhighlight lang="bash" highlight="2">
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /usr/lib/nsr/lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libssl.so.1.0.0 (libc6) => /usr/lib/nsr/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
libcommonssl.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl.so
libcommonssl.so (libc6) => /usr/lib/nsr/libcommonssl.so
libcommonssl-9.2.1.so (libc6,x86-64) => /usr/lib/nsr/lib64/libcommonssl-9.2.1.so
</syntaxhighlight>
The problem was a file in /etc/ld.so.conf.d/ which brought /usr/lib/nsr/lib64 into the system library path; there another libssl.so.1.0.0, in version 1.0.2h, was found first. OK, what to do?
<syntaxhighlight lang=bash>
# rm /etc/ld.so.conf.d/problematic.conf
# rm /etc/ld.so.cache
# ldconfig
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# ldconfig -p | grep ssl
libssl.so.1.0.0 (libc6,x86-64) => /lib64/libssl.so.1.0.0
libgnutls-xssl.so.0 (libc6,x86-64) => /usr/lib64/libgnutls-xssl.so.0
libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/lib64/libevent_openssl-2.0.so.5
</syntaxhighlight>
Now you just have to find a way to get your other software running without manipulating the system library path.
Last check for our case: does our NetWorker use its own SSL libraries?
<syntaxhighlight lang=bash>
# ls -al /proc/$(pgrep --full /usr/sbin/nsrexecd)/map_files | egrep "lib(ssl|crypto)"
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bb73000-7f9d1bdc7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bdc7000-7f9d1bec7000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bec7000-7f9d1bef3000 -> /usr/lib/nsr/lib64/libcrypto.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1bfab000-7f9d1c00c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c00c000-7f9d1c10c000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
lr-------- 1 root root 64 17. Jul 11:31 7f9d1c10c000-7f9d1c116000 -> /usr/lib/nsr/lib64/libssl.so.1.0.0
</syntaxhighlight>
Yep. Great!
== Remove spacewalk from client ==
So the way to get rid of spacewalk is:
<syntaxhighlight lang=bash>
# zypper remove --clean-deps spacewalksd spacewalk-check zypp-plugin-spacewalk spacewalk-client-tools
</syntaxhighlight>
== Register at SuSE Manager ==
After that, re-register your server with the SuSE Manager like this:
<syntaxhighlight lang=bash>
# /usr/bin/wget --no-check-certificate -O - https://susemgr.server.tld/pub/bootstrap/yourbootstrap.sh | bash
</syntaxhighlight>
== Update SuSE Manager certificate ==
=== Create work area ===
<syntaxhighlight lang=bash>
# mkdir --parents ~/ssl-build/$(hostname --short)
# cd ~/ssl-build
</syntaxhighlight>
=== Build RHN-ORG-TRUSTED-SSL-CERT and rhn-org-trusted-ssl-cert-1.0-*.noarch.rpm ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-ca --rpm-only --dir="$HOME/ssl-build" --from-ca-cert=<path to your CA certificate file>
# openssl x509 -noout -subject -dates -in ~/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
subject=C = DE, O = Hosting, CN = My-CA
notBefore=Mar 22 12:28:05 2017 GMT
notAfter=Mar 22 12:38:05 2027 GMT
# ls -al ~/ssl-build/*.rpm
...
-rw-r--r-- 1 root root 18262 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.noarch.rpm
-rw-r--r-- 1 root root 16672 17. Nov 12:10 rhn-org-trusted-ssl-cert-1.0-17.src.rpm
</syntaxhighlight>
=== Generate CSR ===
<syntaxhighlight lang=bash>
# cd ~/ssl-build/$(hostname --short)
# declare -a hosts=( "susemgr.tld.de" "othername.tld.de" "anotheranothername.tld.de" )
# subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Hosting/OU=Administration'
# emailAddress='suselinux-admin@tld.de'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout server.key -out server.csr -batch -subj "${subject_without_cn} ${emailAddress}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
Generating a RSA private key
...............................................++++
.................................................................................................................................................................++++
writing new private key to 'server.key'
-----
</syntaxhighlight>
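The host list is folded into the subjectAltName by the <code>${hosts[n]:+,DNS:${hosts[n]}}</code> expansions, which emit an entry only for elements that are set, so the same command works for one to five names. A minimal sketch of that mechanism (plain variables instead of the array):

```shell
# ${var:+word} expands to word only if var is set and non-empty;
# empty entries therefore contribute nothing to the SAN list.
host1='susemgr.tld.de'
host2='othername.tld.de'
host3=''
san="DNS:${host1}${host2:+,DNS:${host2}}${host3:+,DNS:${host3}}"
echo "$san"
```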
<syntaxhighlight lang=bash>
# openssl req -noout -in server.csr -text 2>/dev/null | grep -E "(CN|DNS:)"
verify OK
subject=C = DE, ST = Hamburg, L = Hamburg, O = Hosting, OU = Administration suselinux-admin@tld.de, CN = susemgr.tld.de, emailAddress = suselinux-admin@tld.de
DNS:susemgr.tld.de, DNS:othername.tld.de , DNS:anotheranothername.tld.de
</syntaxhighlight>
=== Generate RPMs from certificate and key ===
<syntaxhighlight lang=bash>
# rhn-ssl-tool --gen-server --rpm-only --dir="/root/ssl-build"
...working...
Generating web server's SSL key pair/set RPM:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.src.rpm
/root/ssl-build/susemgr/rhn-org-httpd-ssl-key-pair-susemgr-1.0-3.noarch.rpm
The most current SUSE Manager Proxy installation process against SUSE Manager hosted
requires the upload of an SSL tar archive that contains the CA SSL public
certificate and the web server's key set.
Generating the web server's SSL key set and CA SSL public certificate archive:
/root/ssl-build/susemgr/rhn-org-httpd-ssl-archive-susemgr-1.0-3.tar
Deploy the server's SSL key pair/set RPM:
(NOTE: the SUSE Manager or Proxy installers may do this step for you.)
The "noarch" RPM needs to be deployed to the machine working as a
web server, or SUSE Manager, or SUSE Manager Proxy.
Presumably 'susemgr.tld.de'.
</syntaxhighlight>
=== Install certificate and key in the apache directories ===
Install the latest version (rpm -Uvh replaces the previously installed one):
<syntaxhighlight lang=bash>
# cd /root/ssl-build/susemgr
# rpm -Uvh $(grep -E "rhn-org-httpd-ssl-key-pair-.*.noarch.rpm" latest.txt)
</syntaxhighlight>
Check:
<syntaxhighlight lang=bash>
# openssl x509 -noout -in /etc/apache2/ssl.crt/server.crt -dates
notBefore=Nov 16 08:35:35 2022 GMT
notAfter=Nov 16 08:35:35 2023 GMT
</syntaxhighlight>
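To be sure the deployed key really belongs to the certificate, compare the RSA moduli. This is a generic openssl check, sketched with a throwaway pair so the example is self-contained; on the server you would point -in at /etc/apache2/ssl.crt/server.crt and /etc/apache2/ssl.key/server.key instead:

```shell
# Generate a throwaway key/certificate pair for the demonstration,
# then compare the moduli; matching moduli mean the key fits the cert.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
    -keyout "$tmp/server.key" -out "$tmp/server.crt" 2>/dev/null
crt_mod=$(openssl x509 -noout -modulus -in "$tmp/server.crt")
key_mod=$(openssl rsa -noout -modulus -in "$tmp/server.key")
[ "$crt_mod" = "$key_mod" ] && echo 'key matches certificate'
rm -r "$tmp"
```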
I don't know the official SuSE way to do this, but this works:
<syntaxhighlight lang=bash>
# cp -p /etc/apache2/ssl.crt/server.crt /etc/pki/tls/certs/spacewalk.crt
# cp -p /etc/apache2/ssl.key/server.key /etc/pki/tls/private/spacewalk.key
# cp -p /etc/apache2/ssl.key/server.key /etc/pki/tls/private/pg-spacewalk.key
# chmod 0640 /etc/pki/tls/private/pg-spacewalk.key
# chgrp postgres /etc/pki/tls/private/pg-spacewalk.key
</syntaxhighlight>
<syntaxhighlight lang=bash>
# spacewalk-service restart
# echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Nov 16 08:35:35 2022 GMT
notAfter=Nov 16 08:35:35 2023 GMT
</syntaxhighlight>
7612fe7a529e9eafcb6d7b1e8479de7a7a131225
ESPEasy
0
371
2689
2677
2022-11-18T09:26:00Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are shown, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! This deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== Get or set the time zone offset ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
==Rules==
===Switch relays based on MQTT events===
On my relay board ESP12F_Relay_X4 I configured [[#MQTT]] and use rules to switch the 4 relays.
<syntaxhighlight lang=basic>
on MQTT#Relay* do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
if %eventvalue1%<0
//
// A value below 0 turns the relay on for abs(%eventvalue1%) seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Rules#Timer do
logentry,"Eventname %eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
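The rule pulls the relay number out of the event name with <code>{substring:begin:end:%eventname%}</code>, which is 0-based with an exclusive end. For an incoming event MQTT#Relay3 the offsets can be sanity-checked with plain shell tools (cut counts 1-based):

```shell
# substring 5..11  -> "Relay3" (used in the log text)
# substring 10..11 -> "3" (relay number, also the variable/timer index)
eventname='MQTT#Relay3'
relay_label=$(echo "$eventname" | cut -c6-11)
relay_num=$(echo "$eventname" | cut -c11)
echo "$relay_label $relay_num"
```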
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
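For reference, each loop iteration masks one packaged rules file by creating a symlink of the same name to /dev/null under /etc/udev/rules.d/; basename strips the directory part. The file name below is just an example, your brltty package may ship different ones:

```shell
# basename keeps only the last path component, which becomes the
# name of the masking symlink in /etc/udev/rules.d/.
f=/usr/lib/udev/rules.d/90-brltty-device.rules
b=$(basename "$f")
echo "$b"
```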
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
cf8b1320cc7fda716bae15866d1b464d63a968e4
2692
2689
2022-11-18T09:40:17Z
Lollypop
2
/* MQTT */
wikitext
text/x-wiki
[[Category:KnowHow]]
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you really want this?
This deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== Get or set the time zone offset in minutes from UTC ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
Configure MQTT on ESPEasy like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this, where banana1.fritz.box in this example is the server your MQTT broker is running on.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my ESP12F_Relay_X4 relay board I configured [[#MQTT]] and use rules to switch the four relays.
<syntaxhighlight lang=basic>
on MQTT#Relay* do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
if %eventvalue1%<0
//
// A value below 0 turns the relay on and sets a timer for abs(%eventvalue1%) seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Rules#Timer do
logentry,"Eventname %eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
38f69af8487e1c0f7dd4e99a10aa99881df9c2a8
2695
2692
2022-11-18T12:53:35Z
Lollypop
2
/* MQTT */
wikitext
text/x-wiki
[[Category:KnowHow]]
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ unzip ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you really want this?
This deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== Get or set the time zone offset in minutes from UTC ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
===Controller===
Configure the MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this, where <i>banana1.fritz.box</i> in this example is the server your MQTT broker is running on.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen for MQTT events. For that, configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this, where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you subscribe to.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my ESP12F_Relay_X4 relay board I configured [[#MQTT]] and use rules to switch the four relays.
<syntaxhighlight lang=basic>
on MQTT#Relay* do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
if %eventvalue1%<0
//
// A value below 0 turns the relay on and sets a timer for abs(%eventvalue1%) seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Rules#Timer do
logentry,"Eventname %eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
a01e5914f0949b6d47e447d93214ccb002cf2f10
2701
2695
2022-12-07T11:01:21Z
Lollypop
2
/* Switch relays based on MQTT events */
wikitext
text/x-wiki
[[Category:KnowHow]]
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ unzip ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you really want this?
This deletes the whole configuration, but not the firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== Get or set the time zone offset in minutes from UTC ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
===Controller===
Configure the MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this, where <i>banana1.fritz.box</i> in this example is the server your MQTT broker is running on.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen for MQTT events. For that, configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this, where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you subscribe to.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my ESP12F_Relay_X4 relay board I configured [[#MQTT]] and use rules to switch the four relays.
<syntaxhighlight lang=basic>
on system#boot do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
endon
on MQTT#Relay* do
if %eventvalue1%<0
//
// A value below 0 turns the relay on and sets a timer for abs(%eventvalue1%) seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
timerSet,{substring:10:11:%eventname%},0
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Cron#Relay4 do
logentry,"%eventname%: Turn on Relay4."
gpio,%v4%,1
timerSet,4,30
endon
on Rules#Timer do
logentry,"%eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
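The <code>{substring:...}</code> expressions above slice the event name to get the relay name and number. A bash analogue of the same indexing (illustrative only — ESPEasy's substring takes start and end positions, while bash takes start and length):
<syntaxhighlight lang=bash>
eventname='MQTT#Relay1'
# ESPEasy {substring:5:11:%eventname%}: characters 5..10 -> the relay name
echo "${eventname:5:6}"    # Relay1
# ESPEasy {substring:10:11:%eventname%}: character 10 -> the relay number,
# which indexes the variable (%v1%..%v4%) holding the GPIO pin
echo "${eventname:10:1}"   # 1
</syntaxhighlight>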
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
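The udev masking loop can be tried out harmlessly in a scratch directory first. This sketch uses temporary directories as stand-ins for /usr/lib/udev/rules.d and /etc/udev/rules.d:
<syntaxhighlight lang=bash>
src=$(mktemp -d)   # stands in for /usr/lib/udev/rules.d
dst=$(mktemp -d)   # stands in for /etc/udev/rules.d
touch "$src/85-brltty.rules" "$src/90-brltty-device.rules"
# Mask each matching rule file by symlinking its name to /dev/null
for f in "$src"/*brltty*.rules; do
    ln -s /dev/null "$dst/$(basename "$f")"
done
ls -l "$dst"   # both names now resolve to /dev/null, so udev ignores the rules
</syntaxhighlight>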
6bdd1c2ab62a082ccebf346459846a4b3ef666de
File:ESPEasy Controller MQTT.jpg
6
401
2690
2022-11-18T09:30:45Z
Lollypop
2
Configure MQTT on ESPEasy
wikitext
text/x-wiki
== Summary ==
Configure MQTT on ESPEasy
76a98a7088cee689be3882b7a42d7c780533dd02
File:ESPEasy Controller MQTT Add.jpg
6
402
2691
2022-11-18T09:38:20Z
Lollypop
2
Adding a new MQTT Controller openHAB
wikitext
text/x-wiki
== Summary ==
Adding a new MQTT Controller openHAB
94d111d660e7382969de227b4b1ebbe081365abd
File:ESPEasy Devices Generic MQTT Import.jpg
6
403
2693
2022-11-18T09:51:58Z
Lollypop
2
ESPEasy Generic MQTT Import
wikitext
text/x-wiki
== Summary ==
ESPEasy Generic MQTT Import
ff693a82aa125636aab6d6d7809ebfbc138c436b
File:ESPEasy Devices Generic MQTT Import Add.jpg
6
404
2694
2022-11-18T12:49:47Z
Lollypop
2
ESPEasy Devices Generic - MQTT Import Add
wikitext
text/x-wiki
== Summary ==
ESPEasy Devices Generic - MQTT Import Add
718615bc297c5a9bd1eecfe76d7e48ac9fdb44b2
MySQL Tipps und Tricks
0
197
2698
2502
2022-11-23T11:07:20Z
Lollypop
2
/* /etc/idmapd.conf */
wikitext
text/x-wiki
[[Category:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL traffic sent by a client===
<syntaxhighlight lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</syntaxhighlight>
===MySQL process list each second===
<syntaxhighlight lang=bash>
# mysqladmin -i 1 --verbose processlist
</syntaxhighlight>
===All grants===
<syntaxhighlight lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</syntaxhighlight>
Or a little nicer:
<syntaxhighlight lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</syntaxhighlight>
===Last update time===
* Per table
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</syntaxhighlight>
* Per database
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</syntaxhighlight>
==InnoDB space==
===Per database===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</syntaxhighlight>
===Per table===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</syntaxhighlight>
==Logging==
Settings changed with SET GLOBAL last only until the next restart.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</syntaxhighlight>
* Log to the files set in general_log_file and slow_query_log_file
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</syntaxhighlight>
* Both: tables and files
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</syntaxhighlight>
* None: if NONE appears among the log_output destinations, logging is disabled
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</syntaxhighlight>
is equal to
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</syntaxhighlight>
===Enable/disable general logging===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
===Enable/disable logging of slow queries===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
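The logging settings above can be made permanent in my.cnf. A sketch (paths and the long_query_time threshold are examples, not taken from this setup; the variable names are the MySQL 5.6+ spellings):
<syntaxhighlight lang=mysql>
[mysqld]
# Persistent logging setup -- survives restarts, unlike SET GLOBAL
log_output          = FILE
general_log         = 0                              # enable only while debugging
general_log_file    = /var/lib/mysql/general.log
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time     = 2                              # seconds; slower queries are logged
</syntaxhighlight>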
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
To find out which binlog file to investigate on the master, run this on your slave:
<syntaxhighlight lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</syntaxhighlight>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<syntaxhighlight lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</syntaxhighlight>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance; only metadata is journaled, but data is written out before its metadata; the ext3 default mode)
* data=journal (worst performance, but best data protection; metadata and all data are journaled)
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that, the device is owned by mysql:
<syntaxhighlight lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
Determine the size:
<syntaxhighlight lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</syntaxhighlight>
Yes... really 25GB!
Add your logical volume to your configuration in /etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</syntaxhighlight>
Start mysql:
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
Aaaaaand.. do not forget AppArmor! Like I did.. :-D
<syntaxhighlight lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</syntaxhighlight>
Add your raw device to the AppArmor config in /etc/apparmor.d/local/usr.sbin.mysqld:
<syntaxhighlight lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</syntaxhighlight>
Reload apparmor:
<syntaxhighlight lang=bash>
# service apparmor reload
</syntaxhighlight>
Another try!
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
<syntaxhighlight lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</syntaxhighlight>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw''':
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</syntaxhighlight>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<syntaxhighlight lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</syntaxhighlight>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems, do this:
<syntaxhighlight lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</syntaxhighlight>
====== /etc/sysctl.d/99-mysql.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</syntaxhighlight>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
#nfs.v3.enable on
nfs.tcp.enable=on
nfs.tcp.recvwindowsize=65536
nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</syntaxhighlight>
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<syntaxhighlight lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</syntaxhighlight>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the number of files for the service you have to tell the systemd the new limit.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate and check the limit
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</syntaxhighlight>
====== Modify systemd service to wait for NFS ======
To be sure that the NFS mount is ready when the mysql server starts add After=nfs-client.target to the systemd service in the Unit-section.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the changes...
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</syntaxhighlight>
... and check they are active:
<syntaxhighlight lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</syntaxhighlight>
====== /etc/idmapd.conf ======
<syntaxhighlight lang=text>
# Domain = localdomain
Domain = this.domain.tld
</syntaxhighlight>
<syntaxhighlight lang=text>
# systemctl restart nfs-idmapd.service
</syntaxhighlight>
====== /etc/fstab ======
<syntaxhighlight lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</syntaxhighlight>
<syntaxhighlight lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</syntaxhighlight>
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<syntaxhighlight lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</syntaxhighlight>
====== Short stupid performance test ======
<syntaxhighlight lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</syntaxhighlight>
Some things seem to work...
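Note that dd without a final sync largely measures the client page cache, not the mount. A variant using GNU dd's conv=fdatasync includes the flush in the timing; it writes to /tmp here for illustration, point of= at /MYSQLNFS_DATA/io.test to test the mount:
<syntaxhighlight lang=bash>
# fdatasync forces the written data out before dd exits,
# so the reported time covers the actual transfer, not just the page cache.
dd if=/dev/zero of=/tmp/io.test bs=16k count=1024 conv=fdatasync
rm -f /tmp/io.test
</syntaxhighlight>
Also keep in mind that /dev/zero data compresses and deduplicates extremely well on a NetApp, so the numbers are optimistic.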
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
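The 0.7*total_mem_size rule of thumb from the comment can be computed on the host; a small sketch reading MemTotal from /proc/meminfo (Linux only):
<syntaxhighlight lang=bash>
# Suggest a buffer pool of ~70% of physical RAM, rounded down to whole MB.
total_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
pool_mb=$(( total_kb * 7 / 10 / 1024 ))
echo "innodb_buffer_pool_size=${pool_mb}M"
</syntaxhighlight>
On a dedicated database host this is a reasonable starting point; leave more headroom if other services share the machine.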
==Analyze==
<syntaxhighlight lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</syntaxhighlight>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<syntaxhighlight lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</syntaxhighlight>
===percona-toolkit===
<syntaxhighlight lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</syntaxhighlight>
===Sysbench===
<syntaxhighlight lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</syntaxhighlight>
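To pull the headline number out of the run summary, the transactions line can be parsed with sed. The sample line below is hardcoded for illustration and assumes the sysbench 0.5 oltp output format:
<syntaxhighlight lang=bash>
# Extract transactions/sec from a sysbench summary line (format assumed, see above).
line='    transactions:                        500000 (1666.61 per sec.)'
tps=$(echo "$line" | sed -n 's/.*(\([0-9.]*\) per sec.*/\1/p')
echo "TPS: $tps"
</syntaxhighlight>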
==Recover a damaged root account==
===Lost grants===
Try out:
<syntaxhighlight lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
Or:
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</syntaxhighlight>
===Lost password===
<syntaxhighlight lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
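You need a replacement password anyway; one way to generate one with nothing but coreutils (the variable name is just for the example, redirect the echo into /root/mysql-init as above instead of printing it):
<syntaxhighlight lang=bash>
# 20 random alphanumeric characters from the kernel entropy pool.
newpw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20)
echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('${newpw}');"
</syntaxhighlight>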
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<syntaxhighlight lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</syntaxhighlight>
/etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
/etc/mysql/conf.d/myisam.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</syntaxhighlight>
/etc/mysql/conf.d/mysqld.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
syslog
</syntaxhighlight>
/etc/mysql/conf.d/query_cache.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</syntaxhighlight>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<syntaxhighlight lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</syntaxhighlight>
[[Category:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL-traffic fired from a client===
<syntaxhighlight lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</syntaxhighlight>
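How the perl filter reassembles wrapped statements can be seen without tcpdump by feeding it canned lines. This demo adds an END block to flush the last statement (the live version above can omit it, since the capture stream never ends):
<syntaxhighlight lang=bash>
# Two statements, the first wrapped over two lines as strings(1) would emit them.
printf '%s\n' 'SELECT id' '  FROM t1' 'UPDATE t2 SET x=1' | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
  if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
    if (defined $q) { print "$q\n"; }
    $q=$_;
  } else {
    $_ =~ s/^[ \t]+//; $q.=" $_";
  }
}
END { print "$q\n" if defined $q; }'
</syntaxhighlight>
This prints each statement on its own line: the continuation line is glued onto "SELECT id", and the UPDATE starts a new statement.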
===MySQL processes each second===
<syntaxhighlight lang=bash>
# mysqladmin -i 1 --verbose processlist
</syntaxhighlight>
===All grants===
<syntaxhighlight lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</syntaxhighlight>
Or a little nicer:
<syntaxhighlight lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</syntaxhighlight>
===Last update time===
* Per table
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</syntaxhighlight>
* Per database
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</syntaxhighlight>
==InnoDB space==
===Per database===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</syntaxhighlight>
===Per table===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</syntaxhighlight>
==Logging==
Settings changed with SET GLOBAL last only until the server restarts.
'''Don't forget to add them to your my.cnf to make them permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</syntaxhighlight>
* Log to the files named by general_log_file and slow_query_log_file (the default)
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</syntaxhighlight>
* Both: tables and files
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</syntaxhighlight>
* None: if NONE appears anywhere in the log_output list, logging is disabled regardless of the other destinations
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</syntaxhighlight>
is equal to
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</syntaxhighlight>
===Enable/disable general logging===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
===Enable/disable logging of slow queries===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from Master:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
For an idea of the binlog file to investigate on the master do this on your slave:
<syntaxhighlight lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</syntaxhighlight>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<syntaxhighlight lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</syntaxhighlight>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (good performance, the ext3/ext4 default; metadata is journaled and data blocks are written out before their metadata)
* data=journal (worst performance but best data protection; journals metadata and all data)
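Putting the options together, a hypothetical fstab entry for a dedicated data filesystem could look like this (device from the mkfs example above; the mountpoint /var/lib/mysql is just an example):
<syntaxhighlight lang=text>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql ext4 noatime,data=writeback 0 2
</syntaxhighlight>
data=writeback trades crash consistency of file contents for speed; InnoDB has its own redo log, but pick data=ordered if in doubt.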
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<syntaxhighlight lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
Determine the size:
<syntaxhighlight lang=bash>
# lvs vg-data
  LV                  VG      Attr      LSize  Pool Origin Data% Move Log Copy% Convert
  lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</syntaxhighlight>
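The bc check can be done in plain bash as well; for a live device you could take the byte count from blockdev --getsize64 (util-linux) instead of hardcoding it:
<syntaxhighlight lang=bash>
# 26843545600 bytes (from the fdisk output above) expressed in whole GiB.
bytes=26843545600
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"
</syntaxhighlight>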
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</syntaxhighlight>
Start mysql:
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<syntaxhighlight lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</syntaxhighlight>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<syntaxhighlight lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</syntaxhighlight>
Reload apparmor:
<syntaxhighlight lang=bash>
# service apparmor reload
</syntaxhighlight>
Another try!
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
<syntaxhighlight lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</syntaxhighlight>
Much better!
So shutdown MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</syntaxhighlight>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<syntaxhighlight lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver    volume        size policy        junction-path
---------- ------------- ---- ------------- --------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG  1GB  mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</syntaxhighlight>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems here, do this:
<syntaxhighlight lang=text>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</syntaxhighlight>
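To confirm the module is really gone, check /proc/modules:
<syntaxhighlight lang=bash>
# Prints the module line if it is still loaded, otherwise a short confirmation.
grep rpcsec_gss_krb5 /proc/modules || echo "rpcsec_gss_krb5 not loaded"
</syntaxhighlight>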
====== /etc/sysctl.d/99-mysql.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Semaphores & IPC for optimizations in innodb
kernel.shmmax=2147483648
kernel.shmall=2147483648
kernel.msgmni=1024
kernel.msgmax=65536
kernel.sem=250 32000 32 1024
###################################################################
# Swap
vm.swappiness = 0
vm.vfs_cache_pressure = 50
</syntaxhighlight>
====== /etc/sysctl.d/99-netapp-nfs.conf ======
<syntaxhighlight lang=text>
#
## http://www.ajohnstone.com/achives/optimizing-mysql-over-nfs-with-netapp/
#
###################################################################
# Optimization for netapp/nfs increased from 64k, @see http://tldp.org/HOWTO/NFS-HOWTO/performance.html#MEMLIMITS
net.core.wmem_default=262144
net.core.rmem_default=262144
net.core.wmem_max=262144
net.core.rmem_max=262144
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
# Guidelines from http://media.netapp.com/documents/mysqlperformance-5.pdf
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
sunrpc.tcp_slot_table_entries=128
# NOTE: the nfs.* and iscsi.* entries below are Data ONTAP options (set them
# on the filer), not Linux sysctls; sysctl will reject them on the client.
#nfs.v3.enable on
#nfs.tcp.enable=on
#nfs.tcp.recvwindowsize=65536
#nfs.tcp.xfersize=65536
#iscsi.iswt.max_ios_per_session 128
#iscsi.iswt.tcp_window_size 131400
#iscsi.max_connections_per_session 16
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
kernel.sysrq=0
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.base_reachable_time = 86400
net.ipv4.neigh.default.gc_stale_time = 86400
</syntaxhighlight>
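Files in /etc/sysctl.d/ are applied at boot; to apply them immediately run sysctl --system as root (procps). The values can then be read back through procfs to confirm, e.g.:
<syntaxhighlight lang=bash>
# Current values as the kernel sees them (works without root).
cat /proc/sys/vm/swappiness
cat /proc/sys/net/core/rmem_max
</syntaxhighlight>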
====== Raise allowed number of open files for mysql in /etc/security/limits.d/mysql.conf ======
<syntaxhighlight lang=text>
mysql soft nofile 1024000
mysql hard nofile 1024000
mysql soft nproc 10240
mysql hard nproc 10240
</syntaxhighlight>
====== Modify systemd mysql.service to raise the number of files limit ======
To raise the open files limit for the service itself, you have to tell systemd the new limit.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
...
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the change and to check the new limit:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
# awk 'NR==1 || /Max open files/' /proc/$(pgrep mysqld$)/limits
Limit Soft Limit Hard Limit Units
Max open files 1024000 1024000 files
</syntaxhighlight>
====== Modify systemd service to wait for NFS ======
To make sure that the NFS mounts are ready when the MySQL server starts, add After=nfs-client.target to the [Unit] section of the systemd service.
<syntaxhighlight lang=bash>
# systemctl edit mysql.service
</syntaxhighlight>
and enter:
<syntaxhighlight lang=ini>
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl cat mysql
# /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
# /etc/systemd/system/mysql.service.d/override.conf
[Unit]
Description=MySQL Community Server
After=network.target
After=nfs-client.target
[Service]
LimitNOFILE=1024000
</syntaxhighlight>
Do not forget to activate the changes...
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl restart mysql
</syntaxhighlight>
... and check they are active:
<syntaxhighlight lang=bash>
# systemctl list-dependencies --after mysql.service | grep nfs-client.target
● ├─nfs-client.target
</syntaxhighlight>
====== /etc/idmapd.conf ======
<syntaxhighlight lang=text>
# Domain = localdomain
Domain = this.domain.tld
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl restart nfs-idmapd.service
</syntaxhighlight>
====== /etc/fstab ======
<syntaxhighlight lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</syntaxhighlight>
<syntaxhighlight lang=mysql>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</syntaxhighlight>
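Whether the cache actually helps can be judged from the Qcache status counters: a low Qcache_hits compared to Qcache_inserts, or many Qcache_lowmem_prunes, means the cache is too small or not useful for the workload.
<syntaxhighlight lang=mysql>
mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';
</syntaxhighlight>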
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<syntaxhighlight lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</syntaxhighlight>
====== Short stupid performance test ======
<syntaxhighlight lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</syntaxhighlight>
Some things seem to work... Note that without conv=fdatasync (or oflag=direct) on the dd command this mostly measures the local page cache, not the sustained NFS write throughput.
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
==Analyze==
<syntaxhighlight lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</syntaxhighlight>
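The full status list is long; usually you filter for one counter group, for example the thread counters:
<syntaxhighlight lang=mysql>
mysql> SHOW GLOBAL STATUS LIKE 'Threads%';
</syntaxhighlight>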
* See [[http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]]
===Find statements which lead into an error===
<syntaxhighlight lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</syntaxhighlight>
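If events_statements_history stays empty, the corresponding performance_schema consumer may be disabled. It can be switched on at runtime (a sketch; the setting is not persistent across restarts):
<syntaxhighlight lang=mysql>
mysql> UPDATE performance_schema.setup_consumers SET enabled='YES' WHERE name='events_statements_history';
</syntaxhighlight>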
===percona-toolkit===
<syntaxhighlight lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</syntaxhighlight>
===Sysbench===
<syntaxhighlight lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</syntaxhighlight>
==Recover a damaged root account==
===Lost grants===
Try out:
<syntaxhighlight lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
Or:
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</syntaxhighlight>
===Lost password===
<syntaxhighlight lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
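Note that the SET PASSWORD ... = PASSWORD('...') syntax does not work on newer servers (the PASSWORD() function was removed in MySQL 8.0). There the init file would contain something like:
<syntaxhighlight lang=mysql>
ALTER USER 'root'@'localhost' IDENTIFIED BY 'the root password for mysql';
</syntaxhighlight>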
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<syntaxhighlight lang=mysql>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</syntaxhighlight>
/etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
/etc/mysql/conf.d/myisam.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</syntaxhighlight>
/etc/mysql/conf.d/mysqld.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 512
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<syntaxhighlight lang=mysql>
[mysqld_safe]
syslog
</syntaxhighlight>
/etc/mysql/conf.d/query_cache.cnf:
<syntaxhighlight lang=mysql>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</syntaxhighlight>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<syntaxhighlight lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</syntaxhighlight>
[[Category:MySQL|Tipps und Tricks]]
==Oneliner==
===Show MySQL traffic fired from a client===
<syntaxhighlight lang=bash>
# tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
if (defined $q) { print "$q\n"; }
$q=$_;
} else {
$_ =~ s/^[ \t]+//; $q.=" $_";
}
}'
</syntaxhighlight>
===Mysql processes each second===
<syntaxhighlight lang=bash>
# mysqladmin -i 1 --verbose processlist
</syntaxhighlight>
===All grants===
<syntaxhighlight lang=bash>
# mysql --skip-column-names --batch --execute 'select concat("`",user,"`@`",host,"`") from mysql.user' | xargs -n 1 -i mysql --execute 'show grants for {}'
</syntaxhighlight>
Or a little nicer:
<syntaxhighlight lang=bash>
#!/bin/bash
#
## Written by Lars Timmann <L@rs.Timmann.de> 2017
#
function usage () {
cat << EOH
Usage: $0 [--all] [--grant-user <pattern>|--gu <pattern>] [--grant-db <pattern>|--gdb <pattern>] [--help] ...
--help: This output
--grant-user|--gu: You can specify this option several times.
The <pattern> can be:
<user> : You will get grants on all hosts for this user.
@<host> : You will get grants for all users on this host.
<user>@<host> : You will get specific grants for user@host.
The pattern may contain % as wildcard.
If the pattern is @% it shows all grants where host is exactly '%'.
--grant-db|--gdb: You can specify this option several times.
The pattern names the database to look for.
The pattern may contain % as wildcard.
--all: Show all grants
...: Optional parameters to the mysql command
EOH
exit
}
show_all_grants=0
declare -a grant_user
for ((param=1;param<=${#};param++))
do
case ${!param} in
--grant-user|--gu)
param=$[ ${param} + 1 ]
grant_user+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--grant-db|--gdb)
param=$[ ${param} + 1 ]
grant_db+=( "${!param}" )
# delete 2 parameters from list and set back $param
set -- "${@:1:param-2}" "${@:param+1}"
param=$[ ${param} - 2 ]
;;
--all)
show_all_grants=1
# delete 1 parameter from list and set back $param
set -- "${@:1:param-1}" "${@:param+1}"
param=$[ ${param} - 1 ]
;;
--help)
usage
;;
*)
;;
esac
done
count=${#grant_user[@]}
for((param=0;param<count;param++))
do
before=${#grant_user[@]}
grant="${grant_user[${param}]}"
user="${grant%@*}"
if [[ "${grant}" == *\@?* ]]
then
host="${grant/*@}"
else
host=''
fi
case ${host} in
'')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where user like '${user}'"
;;
'%')
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host='${host}' ${user:+and user like '${user}'}"
;;
*)
select="select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user where host like '${host}' ${user:+and user like '${user}'}"
;;
esac
grant_user=( "${grant_user[@]:0:param}" $(mysql $* --silent --skip-column-names --execute "${select}" | sort ) "${grant_user[@]:param+1}" )
after=${#grant_user[@]}
param=$[ param + after - before ]
count=$[ count + after - before ]
done
# Get user for database in grant_db array
for db in ${grant_db[@]}
do
grant_user+=( $(mysql $* --silent --skip-column-names --execute "
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.db where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.columns_priv where db like '${db}';
select concat('\'',user,'\'@\'',host,'\'') as user from mysql.tables_priv where db like '${db}';
" | sort -u ) )
done
# --all
if [ ${show_all_grants} -eq 1 ]
then
printf -- '--\n-- %s\n--\n' "all grants";
grant_user=( $(mysql $* --silent --skip-column-names --execute "select concat('\'',user,'\'@\'',host,'\'') as user from mysql.user" | sort ) )
fi
for user in ${grant_user[@]}
do
printf -- '--\n-- %s\n--\n' "${user}";
show_create_user="$(mysql $* --silent --skip-column-names --execute "select (substring_index(version(), '.',1) >= 5) and (substring_index(substring_index(version(), '.', 2),'.',-1) >=7) as show_create_user;";)"
if [ "${show_create_user}" -eq 1 ]
then
mysql $* --silent --skip-column-names --execute "show create user ${user};" | sed 's/$/;/'
fi
OLD_IFS=${IFS}
IFS=$'\n'
for grant in $(mysql $* --silent --skip-column-names --execute "show grants for ${user}" | sed 's/$/;/')
do
regex='GRANT[ ]+.*[ ]+ON[ ]+(FUNCTION[ ]+|)`([^`]*)`\..*'
if [[ $grant =~ $regex ]]
then
database=${BASH_REMATCH[2]}
if [ ${#grant_db[@]} -gt 0 ]
then
if [[ " ${grant_db[@]} " =~ " ${database} " ]]
then
echo "${grant}"
fi
else
echo "${grant}"
fi
else
echo "${grant}"
fi
done
done
</syntaxhighlight>
===Last update time===
* Per table
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,TABLE_NAME,UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES ORDER BY DB,UPDATE_TIME;
</syntaxhighlight>
* Per database
<syntaxhighlight lang=mysql>
mysql> SELECT TABLE_SCHEMA AS DB,MAX(UPDATE_TIME) AS LAST_UPDATE FROM INFORMATION_SCHEMA.TABLES GROUP BY DB ORDER BY LAST_UPDATE;
</syntaxhighlight>
==InnoDB space==
===Per database===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name, sum(round(data_length/1024/1024,2)) as total_size_mb from information_schema.tables where engine like 'innodb' group by table_schema order by total_size_mb;
</syntaxhighlight>
===Per table===
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round(data_length/1024/1024,2) as size_mb from information_schema.tables order by size_mb;
</syntaxhighlight>
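data_length only covers the clustered index; to include the secondary indexes, add index_length:
<syntaxhighlight lang=mysql>
mysql> select table_schema as database_name,table_name,round((data_length+index_length)/1024/1024,2) as total_size_mb from information_schema.tables where engine like 'innodb' order by total_size_mb;
</syntaxhighlight>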
==Logging==
Settings changed with SET GLOBAL only last until the next server restart.
'''Don't forget to add it in your my.cnf to make it permanent!'''
===What can I log?===
The interesting variables here are:
* log_queries_not_using_indexes
* log_slave_updates
* log_slow_queries
* general_log
===Choose logging destination FILE/TABLE/NONE===
This affects general_log and slow_query_log.
* Log to the table mysql.slow_log and mysql.general_log
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=TABLE;
</syntaxhighlight>
* Log to the files set via general_log_file and slow_query_log_file
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output=FILE;
</syntaxhighlight>
* Both: tables and files
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE';
</syntaxhighlight>
* None: if NONE appears among the log_output destinations, logging is disabled entirely
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'TABLE,FILE,NONE';
</syntaxhighlight>
is equal to
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL log_output = 'NONE';
</syntaxhighlight>
===Enable/disable general logging===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL general_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
===Enable/disable logging of slow queries===
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SET GLOBAL slow_query_log = 'OFF';
Query OK, 0 rows affected (0.00 sec)
</syntaxhighlight>
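To make the slow query log survive a restart, put the same settings into a config fragment, e.g. /etc/mysql/conf.d/logging.cnf (the file name is just an example):
<syntaxhighlight lang=mysql>
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
long_query_time = 2
</syntaxhighlight>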
== Slave ==
=== Debugging ===
==== What did we see from the master ====
Read the binlog from the master:
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
If you get
 ERROR: Failed on connect: SSL connection error: protocol version mismatch
try
<syntaxhighlight lang=bash>
# mysqlbinlog --read-from-remote-server --host='your replication host' --user='your replication user' --password='your replication password' --ssl-mode=DISABLED --base64-output=auto --database='limit output to this database' -vv mysql-bin.number | less
</syntaxhighlight>
To find out which binlog file to investigate on the master, run this on your slave:
<syntaxhighlight lang=bash>
# mysql -e 'show slave status\G' | awk '$1=="Master_Log_File:"'
</syntaxhighlight>
==Filesystems for MySQL==
===ext3/ext4===
====Create Options====
<syntaxhighlight lang=bash>
# mkfs.ext4 -b 4096 /dev/mapper/vg--data-lv--ext4--mysql_data
</syntaxhighlight>
====Mount options====
* noatime
* data=writeback (best performance; only metadata is journaled)
* data=ordered (the ext3/ext4 default; good performance; metadata is journaled and data blocks are forced out before the related metadata is committed)
* data=journal (worst performance, but best data protection; metadata and all data are journaled)
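As an example, an /etc/fstab entry for a dedicated MySQL data filesystem with these options (the mount point is just an example):
<syntaxhighlight lang=text>
/dev/mapper/vg--data-lv--ext4--mysql_data /var/lib/mysql/data ext4 noatime,data=writeback 0 2
</syntaxhighlight>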
===Raw devices with InnoDB===
'''Take a look at [[Linux_udev_permissions|setting device permissions via udev]] first.'''
'''After''' that the device is owned by mysql:
<syntaxhighlight lang=bash>
# ls -alL /dev/vg-data/lv-rawdisk-innodb01
brw-rw---- 1 mysql mysql 252, 0 Aug 12 15:07 /dev/vg-data/lv-rawdisk-innodb01
</syntaxhighlight>
Determine the size:
<syntaxhighlight lang=bash>
# lvs vg-data
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lv-rawdisk-innodb01 vg-data -wi-a---- 25.00g
# fdisk -l /dev/vg-data/lv-rawdisk-innodb01
Disk /dev/vg-data/lv-rawdisk-innodb01: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# bc -l
26843545600/(1024*1024*1024)
25.00000000000000000000
</syntaxhighlight>
Yes... really 25GB!
Add your logical volume to your configuration /etc/mysql/conf.d/innodb.cnf :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Gnewraw
</syntaxhighlight>
Start mysql:
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
Aaaaaand.. do not forget apparmor! Like I did.. :-D
<syntaxhighlight lang=mysql>
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /dev/dm-0
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# tail /var/log/kern.log
...
Aug 12 15:30:09 mysql kernel: [ 5840.118528] audit: type=1400 audit(1439386209.399:33): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/dev/dm-0" pid=11810 comm="mysqld" requested_mask="wr" denied_mask="wr" fsuid=108 ouid=108
...
</syntaxhighlight>
Add your raw device to the apparmor config in /etc/apparmor.d/local/usr.sbin.mysqld :
<syntaxhighlight lang=bash>
# Site-specific additions and overrides for usr.sbin.mysqld.
# For more details, please see /etc/apparmor.d/local/README.
/dev/dm-* rwk,
</syntaxhighlight>
Reload apparmor:
<syntaxhighlight lang=bash>
# service apparmor reload
</syntaxhighlight>
Another try!
<syntaxhighlight lang=bash>
# service mysql start
</syntaxhighlight>
<syntaxhighlight lang=mysql>
InnoDB: The first specified data file /dev/vg-data/lv-rawdisk-innodb01 did not exist:
InnoDB: a new database to be created!
150812 15:48:23 InnoDB: Setting file /dev/vg-data/lv-rawdisk-innodb01 size to 25600 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 ...
</syntaxhighlight>
Much better!
So shut down MySQL again!
Change your configuration /etc/mysql/conf.d/innodb.cnf and '''change newraw to raw!''' :
<syntaxhighlight lang=mysql>
[mysqld]
# InnoDB raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
</syntaxhighlight>
=== NFS ===
==== NFSv4 ====
===== On NetApp CDOT SVM =====
<syntaxhighlight lang=text>
cdot1nfsv4::> export-policy rule create -policyname default -clientmatch 172.18.128.0/22 -superuser none -rwrule none -rorule sys -allow-dev false -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> export-policy create -policyname mysql_clients
cdot1nfsv4::> export-policy rule create -policyname mysql_clients -clientmatch 172.18.128.0/22 -superuser sys -rwrule sys -rorule sys -allow-dev true -allow-suid false
cdot1nfsv4::>
cdot1nfsv4::> nfs server modify -v4.0 enabled -v4-id-domain this.domain.tld
cdot1nfsv4::> set -units GB
cdot1nfsv4::> vol show -volume MYSQLNFS_* -fields volume,policy,size,junction-path
vserver volume size policy junction-path
------------------ --------------------- ---- ------------- ----------------------
cdot1nfsv4 MYSQLNFS_DATA 40GB mysql_clients /MYSQLNFS_DATA
cdot1nfsv4 MYSQLNFS_LOG 1GB mysql_clients /MYSQLNFS_LOG
2 entries were displayed.
</syntaxhighlight>
Links:
* [https://kb.netapp.com/support/s/article/how-to-configure-nfsv4-in-cluster-mode How to configure NFSv4 in Cluster-Mode]
* [https://kb.netapp.com/support/s/article/clustered-data-ontap-nfs-expert-recommended-articles Clustered Data ONTAP NFS Expert recommended articles]
* [https://kb.netapp.com/support/s/article/how-to-configure-netapp-storage-systems-for-network-file-system-version-4-in-aix-and-linux-environments How to configure NetApp storage systems for Network File System version 4 in AIX and Linux environments]
* [https://kb.netapp.com/support/s/article/how-to-enable-or-disable-nfsv4-on-netapp-storage-systems How to enable or disable NFSv4 on NetApp storage systems]
===== On Linux =====
====== Blacklist rpcsec_gss_krb5 ======
To disable loading of the rpcsec_gss_krb5 kernel module, which causes performance problems, do this:
<syntaxhighlight lang=bash>
# echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-rpcsec_gss_krb5.conf
# rmmod rpcsec_gss_krb5
</syntaxhighlight>
Alternatively, switch off Kerberos authentication with the mount option <i>sec=sys</i>.
====== /etc/fstab ======
<syntaxhighlight lang=text>
cdot-nfsv4-svm:/MYSQLNFS_LOG /MYSQLNFS_LOG nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
cdot-nfsv4-svm:/MYSQLNFS_DATA /MYSQLNFS_DATA nfs rw,hard,nointr,rsize=65536,wsize=65536,bg,vers=4,proto=tcp,noatime
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/mysqld.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
...
datadir = /MYSQLNFS_DATA/data/mysql
...
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/innodb.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * InnoDB
#
innodb_data_home_dir = /MYSQLNFS_DATA/InnoDB
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = /MYSQLNFS_LOG/ib_log
#innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = on
</syntaxhighlight>
<syntaxhighlight lang=bash>
# mysql -e "show variables where variable_name like '%dir' and value like '/MYSQLNFS%'"
+---------------------------+------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------+
| datadir | /MYSQLNFS_DATA/data/mysql/ |
| innodb_data_home_dir | /MYSQLNFS_DATA/InnoDB |
| innodb_log_group_home_dir | /MYSQLNFS_LOG/ib_log |
+---------------------------+------------------------------------+
</syntaxhighlight>
====== /etc/mysql/mysql.conf.d/query_cache.cnf ======
<syntaxhighlight lang=ini>
[mysqld]
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
1 row in set (0,00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache%';
+------------------------------+----------+
| Variable_name | Value |
+------------------------------+----------+
| query_cache_limit | 262144 |
| query_cache_min_res_unit | 2048 |
| query_cache_size | 83886080 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
+------------------------------+----------+
5 rows in set (0,00 sec)
</syntaxhighlight>
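As a rough effectiveness check (a sketch — the counter values below are made up; on a live server they come from SHOW GLOBAL STATUS LIKE 'Qcache_hits' and LIKE 'Com_select'):
<syntaxhighlight lang=bash>
# Hypothetical sample counters; replace with values from SHOW GLOBAL STATUS.
qcache_hits=150000
com_select=50000
# Hit ratio = hits / (hits + selects that missed or bypassed the cache)
awk -v h="$qcache_hits" -v s="$com_select" \
    'BEGIN { printf "query cache hit ratio: %.1f%%\n", 100*h/(h+s) }'
</syntaxhighlight>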
====== apparmor : /etc/apparmor.d/local/usr.sbin.mysqld ======
<syntaxhighlight lang=text>
# vim:syntax=apparmor
# This should be always there...
owner @{PROC}/@{pid}/status r,
/sys/devices/system/node/ r,
/sys/devices/system/node/** r,
# The mysql datadir, innodb_data_home_dir
/MYSQLNFS_DATA/ r,
/MYSQLNFS_DATA/** rwk,
# The mysql innodb_log_group_home_dir
/MYSQLNFS_LOG/ r,
/MYSQLNFS_LOG/** rwk,
</syntaxhighlight>
====== Short stupid performance test ======
<syntaxhighlight lang=bash>
# time dd if=/dev/zero of=/MYSQLNFS_DATA/io.test bs=16k count=65536
65536+0 records in
65536+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 1,7552 s, 612 MB/s
real 0m1.772s
user 0m0.016s
sys 0m0.672s
</syntaxhighlight>
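The MB/s figure can be cross-checked from the byte count and the elapsed time dd reports:
<syntaxhighlight lang=bash>
# 1073741824 bytes copied in 1.7552 s (from the dd output above)
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 1.7552 / 1000000 }'
</syntaxhighlight>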
Some things seem to work...
==Sample InnoDB configuration==
/etc/mysql/conf.d/innodb.cnf
<syntaxhighlight lang=ini>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
innodb_buffer_pool_size=1433M
# bulk_insert_buffer_size
bulk_insert_buffer_size=256M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
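The 1433M above matches the 0.7 rule of thumb for a machine with 2 GiB of RAM; a small helper (a sketch) to compute it from MemTotal:
<syntaxhighlight lang=bash>
# buffer_pool_mb <MemTotal in kB> -> 0.7 * total memory, in MB (truncated)
buffer_pool_mb() {
    awk -v kb="$1" 'BEGIN { printf "%d\n", kb * 0.7 / 1024 }'
}
# On a live system: buffer_pool_mb "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
buffer_pool_mb 2097152    # 2 GiB -> 1433
</syntaxhighlight>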
==Analyze==
<syntaxhighlight lang=mysql>
mysql> select * from <tablename> PROCEDURE ANALYSE();
</syntaxhighlight>
<syntaxhighlight lang=mysql>
mysql> SHOW /*!50000 GLOBAL*/ STATUS;
</syntaxhighlight>
* See [http://de.slideshare.net/shinguz/pt-presentation-11465700 MySQL Performance Tuning]
===Find statements which lead into an error===
<syntaxhighlight lang=mysql>
mysql> select CURRENT_SCHEMA,DIGEST_TEXT,MYSQL_ERRNO,MESSAGE_TEXT from performance_schema.events_statements_history where errors!=0\G
*************************** 1. row ***************************
CURRENT_SCHEMA: NULL
DIGEST_TEXT: NULL
MYSQL_ERRNO: 1046
MESSAGE_TEXT: No database selected
1 row in set (0,00 sec)
</syntaxhighlight>
===percona-toolkit===
<syntaxhighlight lang=bash>
# aptitude install percona-toolkit
# mysql -e "explain select * from mysql.user,mysql.db where user.user=db.user" | pt-visual-explain
JOIN
+- Bookmark lookup
| +- Table
| | table db
| | possible_keys User
| +- Index lookup
| key db->User
| possible_keys User
| key_len 48
| ref mysql.user.User
| rows 3
+- Table scan
rows 68
+- Table
table user
</syntaxhighlight>
===Sysbench===
<syntaxhighlight lang=bash>
# mysql -u root -e "create database sbtest;"
# sysbench \
--test=oltp \
--oltp-table-size=10000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
prepare
# sysbench \
--test=oltp \
--oltp-test-mode=complex \
--oltp-table-size=80000000 \
--db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-db=sbtest \
--mysql-user=root \
--mysql-password=$(nawk -F'=' '/password/{print $2}' /root/.my.cnf) \
--mysql-socket=/var/run/mysqld/mysqld.sock \
--num-threads=4 \
--max-time=900 \
--max-requests=500000 \
run
# mysql -u root_rw -e "drop table sbtest;" sbtest
</syntaxhighlight>
==Recover a damaged root account==
===Lost grants===
Try out:
<syntaxhighlight lang=bash>
# service mysql stop
# echo "grant all privileges on *.* to 'root'@'localhost' with grant option;" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:14:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
Or:
<syntaxhighlight lang=bash>
# service mysql stop
# mysqld_safe --skip-grant-tables &
...
# mysql -e "UPDATE mysql.user SET Grant_priv='Y', Super_priv='Y' WHERE User='root'; FLUSH PRIVILEGES; GRANT ALL ON *.* TO 'root'@'localhost';"
# mysqladmin -u root shutdown
# service mysql start
</syntaxhighlight>
===Lost password===
<syntaxhighlight lang=bash>
# service mysql stop
# echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('the root password for mysql');" > /root/mysql-init
# mysqld_safe --init-file=/root/mysql-init
...
150812 19:15:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
# rm /root/mysql-init
# service mysql start
</syntaxhighlight>
==Structured configuration==
This is the default in Ubuntu's /etc/mysql/my.cnf:
<syntaxhighlight lang=ini>
...
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
</syntaxhighlight>
/etc/mysql/conf.d/innodb.cnf:
<syntaxhighlight lang=ini>
[mysqld]
# InnoDB Parameters
# innodb_buffer_pool_size=(0.7*total_mem_size)
#innodb_buffer_pool_size=512M
innodb_buffer_pool_size=256M
# bulk_insert_buffer_size
#bulk_insert_buffer_size=256M
bulk_insert_buffer_size=128M
# innodb_buffer_pool_instances=... more = more concurrency
innodb_buffer_pool_instances=2
# innodb_thread_concurrency= 2*CPUs
innodb_thread_concurrency=4
# innodb_flush_method=O_DIRECT (avoids double buffering)
innodb_flush_method=O_DIRECT
# InnoDB data raw disks
innodb_data_home_dir=
innodb_data_file_path=/dev/vg-data/lv-rawdisk-innodb01:25Graw
# InnoDB log files
innodb_log_files_in_group=2
innodb_log_file_size=100M
innodb_log_group_home_dir=/var/lib/mysql/ib_log
</syntaxhighlight>
/etc/mysql/conf.d/myisam.cnf:
<syntaxhighlight lang=ini>
[mysqld]
#key_buffer = 512M
key_buffer = 128M
table_cache = 8K
myisam_sort_buffer_size = 64M
tmp_table_size = 64M
# Variable: concurrent_insert
# Value Description
# 0 Disables concurrent inserts
# 1 (Default) Enables concurrent insert for MyISAM tables that do not have holes
# 2 Enables concurrent inserts for all MyISAM tables, even those that have holes.
# For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread.
# Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.
concurrent_insert=2
# Variable: myisam_use_mmap
# https://www.percona.com/blog/2006/05/26/myisam-mmap-feature-51/
#
myisam_use_mmap=1
</syntaxhighlight>
/etc/mysql/conf.d/mysqld.cnf:
<syntaxhighlight lang=ini>
[mysqld]
datadir = /var/lib/mysql/data/data
# because mysql is soooo stupid
#ignore-db-dirs = lost+found # when we will have mysql >= 5.6.3
bind-address = 127.0.0.1
open-files-limit = 4096
max_connections = 512
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
table_cache = 8192
thread_concurrency = 4
default-storage-engine = innodb
# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log
# Print warnings to the error log file. If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
log_warnings
# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time" or which do not use
# indexes well, if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries
slow_query_log_file = /var/log/mysql/mysql-slow.log
# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2
# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
#log_long_format
log_bin = /var/lib/mysql/binlog/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
sync_binlog = 0
performance_schema = ON
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe.cnf:
<syntaxhighlight lang=ini>
[mysqld_safe]
</syntaxhighlight>
/etc/mysql/conf.d/mysqld_safe_syslog.cnf:
<syntaxhighlight lang=ini>
[mysqld_safe]
syslog
</syntaxhighlight>
/etc/mysql/conf.d/query_cache.cnf:
<syntaxhighlight lang=ini>
[mysqld]
query_cache_limit = 4M
query_cache_size = 128M
query_cache_min_res_unit = 2K
</syntaxhighlight>
=MySQL Clients=
Small one-liners for testing purposes.
==PHP==
===PHP PDO===
<syntaxhighlight lang=php>
$ php -r '
$pdo=new PDO("mysql:host=mydbhost;dbname=mydb", "user", "pass", ARRAY(
PDO::ATTR_PERSISTENT => true
)
);
$stmt=$pdo->prepare("SELECT * FROM mytable");
if($stmt->execute()){
while($row = $stmt->fetch()){
print_r($row);
}
};
$stmt = null;
$pdo=null;
'
</syntaxhighlight>
d936d4f8102b8581a01c21778635a361018ff0dc
SunServer
0
210
2702
2583
2023-01-09T13:33:00Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Hardware]]
=X86 Systeme=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systeme=
==S7-2==
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</syntaxhighlight>
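The mapping at the end of the script is simply slot = 4*controller + phy_num; for the example above (controller 1, PhyNum 2):
<syntaxhighlight lang=bash>
awk 'BEGIN { controller=1; phy_num=2; print "Slot", 4*controller + phy_num }'
</syntaxhighlight>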
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<syntaxhighlight lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</syntaxhighlight>
* Delete default gateway:
<syntaxhighlight lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</syntaxhighlight>
* Set:
<syntaxhighlight lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</syntaxhighlight>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means turning the suppression of break signals off ;-) :
<syntaxhighlight lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</syntaxhighlight>
7d6474051f9a5f0fef4217d5b08d7a7e254b463b
2703
2702
2023-01-09T13:34:58Z
Lollypop
2
/* S7-2 */
wikitext
text/x-wiki
[[Category:Hardware]]
=X86 Systeme=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systeme=
==S7-2==
===ILOM===
<syntaxhighlight lang=bash>
-> set /HOST/bootmode/ script="setenv auto-boot? false"
Set 'script' to 'setenv auto-boot? false'
-> reset /SYS
Are you sure you want to reset /SYS (y/n)? y
Performing reset on /SYS
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started. To stop, type #.
</syntaxhighlight>
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</syntaxhighlight>
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<syntaxhighlight lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</syntaxhighlight>
* Delete default gateway:
<syntaxhighlight lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</syntaxhighlight>
* Set:
<syntaxhighlight lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</syntaxhighlight>
===Enable sending of break signal===
Example on an M10 (Partition 0)... break_signal=off means turning the suppression of break signals off ;-) :
<syntaxhighlight lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</syntaxhighlight>
10722d2ace4dd769996402e187396c37119ac236
2704
2703
2023-01-09T13:36:14Z
Lollypop
2
/* ILOM */
wikitext
text/x-wiki
[[Category:Hardware]]
=X86 Systeme=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systeme=
==S7-2==
===ILOM===
<syntaxhighlight lang=bash>
-> set /HOST/bootmode/ script="setenv auto-boot? false"
Set 'script' to 'setenv auto-boot? false'
-> reset /SYS
Are you sure you want to reset /SYS (y/n)? y
Performing reset on /SYS
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started. To stop, type #.
...
SPARC S7-2, No Keyboard
Copyright (c) 1998, 2022, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.43.9, 126.5000 GB memory installed, Serial #1154xxxxx.
Ethernet address 0:10:e0:xx:xx:xx, Host ID: 86e15f24.
auto-boot? = false
{0} ok
</syntaxhighlight>
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
[[Category:Hardware]]
=X86 Systeme=
==ILOM==
===Reset SP from OS===
<syntaxhighlight lang=bash>
# ipmitool -I bmc bmc reset cold
Sent cold reset command to MC
</syntaxhighlight>
===Access ILOM from OS===
<syntaxhighlight lang=bash>
# ipmitool sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
# ipmitool -I bmc sunoem cli
Connected. Use ^D to exit.
->
</syntaxhighlight>
===Set SP IP address from OS via ipmitool===
* Set:
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipaddr 172.30.42.149
Setting LAN IP Address to 172.30.42.149
# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 172.30.42.1
Setting LAN Default Gateway IP to 172.30.42.1
</syntaxhighlight>
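If the SP was previously configured for DHCP, the address source may also need to be switched to static first (a hedged addition, not part of the original transcript; on some SPs static is already the default):
<syntaxhighlight lang=bash>
# ipmitool lan set 1 ipsrc static
</syntaxhighlight>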
* Check:
<syntaxhighlight lang=bash>
# ipmitool lan print
Set in Progress : Commit Write
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM :
IP Address Source : Static Address
IP Address : 172.30.42.149
Subnet Mask : 255.255.255.0
MAC Address : 00:1c:24:f0:70:b0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 172.30.42.1
Default Gateway MAC : ff:ff:ff:ff:ff:ff
Backup Gateway IP : 255.255.255.255
Backup Gateway MAC : ff:ff:ff:ff:ff:ff
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
</syntaxhighlight>
===Restore lost Serial/Product Information===
<syntaxhighlight lang=bash>
$ ssh root@x4100-sp
-> show /SYS/MB
/SYS/MB
...
Properties:
type = Motherboard
chassis_name = SUN FIRE X4100
chassis_part_number = 541-0250-04
chassis_serial_number = 0000000-0000000000
chassis_manufacturer = SUN MICROSYSTEMS
product_name = SUN FIRE X4100
product_part_number = 602-0000-00
product_serial_number = 0000000000
product_version = (none)
product_manufacturer = SUN MICROSYSTEMS
fru_name = ASSY,MOTHERBOARD,A64
fru_manufacturer = SUN MICROSYSTEMS
fru_part_number = 501-7644-01
fru_serial_number = 1762TH1-0627002296
...
-> exit
$ ssh sunservice@x4100-sp
Password: <the root password>
[(flash)root@X4100-SP:~]# servicetool --board_replaced=mainboard --fru_product_serial_number --fru_chassis_serial_number --fru_product_part_number
<Fill out the answers>
</syntaxhighlight>
=SPARC Systeme=
==S7-2==
===ILOM===
<syntaxhighlight lang=bash>
-> set /HOST/bootmode/ script="setenv auto-boot? false"
Set 'script' to 'setenv auto-boot? false'
-> reset /SYS
Are you sure you want to reset /SYS (y/n)? y
Performing reset on /SYS
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started. To stop, type #.
...
SPARC S7-2, No Keyboard
Copyright (c) 1998, 2022, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.43.9, 126.5000 GB memory installed, Serial #1154xxxxx.
Ethernet address 0:10:e0:xx:xx:xx, Host ID: 86e15f24.
auto-boot? = false
{0} ok
{0} ok printenv boot-device
boot-device = disk net
{0} ok devalias
fallback-miniroot /pci@300/pci@1/pci@0/pci@2/usb@0/hub@2/storage@1/disk@0
rcdrom /pci@300/pci@1/pci@0/pci@2/usb@0/hub@2/storage@1/disk@0
disk7 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p4
disk6 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p5
disk5 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p7
disk4 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p6
disk3 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p0
disk2 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p1
disk1 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p3
disk /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p2
disk0 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0/disk@p2
sas0 /pci@300/pci@2/pci@0/pci@14/LSI,sas@0
sas /pci@300/pci@2/pci@0/pci@14/LSI,sas@0
nvme3 /pci@300/pci@2/pci@0/pci@7/nvme@0/disk@1
nvme2 /pci@300/pci@2/pci@0/pci@6/nvme@0/disk@1
nvme1 /pci@300/pci@2/pci@0/pci@5/nvme@0/disk@1
nvme0 /pci@300/pci@2/pci@0/pci@4/nvme@0/disk@1
net3 /pci@300/pci@1/pci@0/pci@1/network@0,3
net2 /pci@300/pci@1/pci@0/pci@1/network@0,2
net1 /pci@300/pci@1/pci@0/pci@1/network@0,1
net /pci@300/pci@1/pci@0/pci@1/network@0
net0 /pci@300/pci@1/pci@0/pci@1/network@0
virtual-console
/virtual-devices/console@1
name aliases
{0} ok boot nvme0
Boot device: /pci@300/pci@2/pci@0/pci@4/nvme@0/disk@1 File and args:
SunOS Release 5.11 Version 11.4.45.119.2 64-bit
Copyright (c) 1983, 2022, Oracle and/or its affiliates.
Booting to milestone "svc:/milestone/config:default".
/
</syntaxhighlight>
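After the maintenance session, the bootmode script can be cleared again so the next reset boots normally (a sketch based on the standard ILOM <i>/HOST/bootmode</i> property; verify on your firmware version):
<syntaxhighlight lang=bash>
-> set /HOST/bootmode script=""
</syntaxhighlight>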
==T4-1==
===Get disk slot===
<b>get_disk_slot.sh</b>:
<syntaxhighlight lang=bash>
#!/bin/bash
/usr/sbin/prtconf -v | nawk -v disk="$1" '
function get_value() {
getline line;
split(line,values,"=");
return values[2];
}
/inquiry-serial-no/ {
inquiry_serial_no=get_value();
}
/inquiry-product-id/ {
inquiry_product_id=get_value();
}
/inquiry-vendor-id/ {
inquiry_vendor_id=get_value();
}
/obp-path/ {
obp_path=get_value();
}
/phy-num/ {
phy_num[obp_path]=get_value();
}
$0 ~ "/dev/rdsk/"disk"$" {
split(obp_path,path_parts,"/");
if(path_parts[3]=="pci@1"){
controller[obp_path]=0;
}
if(path_parts[3]=="pci@2"){
controller[obp_path]=1;
}
printf "%s\n\t%s\n\t%s\n\t%s\n\tcontroller %d, PhyNum %d => Slot %d\n",$1,inquiry_vendor_id,inquiry_serial_no,obp_path,controller[obp_path],phy_num[obp_path],4*controller[obp_path]+phy_num[obp_path]
}'
</syntaxhighlight>
Example:
<syntaxhighlight lang=bash>
# ./get_disk_slot.sh c0t5000C500230D43A3d0
dev_link=/dev/rdsk/c0t5000C500230D43A3d0
'SEAGATE'
'00101371ZVHM 3SE1ZVHM'
'/pci@400/pci@2/pci@0/pci@4/scsi@0/disk@w5000c500230d43a1,0'
controller 1, PhyNum 2 => Slot 6
</syntaxhighlight>
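The slot number in the last line follows from the script's formula Slot = 4*controller + PhyNum, which can be checked by hand for the example above:

<syntaxhighlight lang=bash>
# controller 1, PhyNum 2 as in the example output above
echo $(( 4 * 1 + 2 ))   # prints 6, matching "Slot 6"
</syntaxhighlight>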
==XSCF==
===Set XSCF IP address from OS via ssh through dscp===
* Show:
<syntaxhighlight lang=bash>
# /usr/platform/`uname -i`/sbin/prtdscp
Domain Address: 192.168.224.2
SP Address: 192.168.224.1
# ssh eis-installer@192.168.224.1
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C4
inet addr:172.42.0.120 Bcast:172.42.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:885919090 errors:0 dropped:0 overruns:0 frame:0
TX packets:7150700 errors:1 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1987024183 (1.8 GiB) TX bytes:492148426 (469.3 MiB)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:D8:C5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
172.42.0.0 * 255.255.0.0 U xscf#0-lan#0
default 172.42.0.1 0.0.0.0 UG xscf#0-lan#0
</syntaxhighlight>
* Delete default gateway:
<syntaxhighlight lang=bash>
XSCF> setroute -c del -n 0.0.0.0 -m 0.0.0.0 -g 172.42.0.1 xscf#0-lan#0
</syntaxhighlight>
* Set:
<syntaxhighlight lang=bash>
XSCF> setnetwork xscf#0-lan#0 172.32.40.52 -m 255.255.255.0
XSCF> setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1 xscf#0-lan#0
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :hfgsun07-xsfc
DNS domain name :intern.hfg-inkasso.de
nameserver :172.41.0.2
interface :xscf#0-lan#0
status :up
IP address :172.32.40.52
netmask :255.255.255.0
route :-n 172.32.40.1 -m 255.255.255.255
route :-n 0.0.0.0 -m 0.0.0.0 -g 172.32.40.1
interface :xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
</syntaxhighlight>
===Enable sending of break signal===
Example on an M10 (Partition 0). Note that break_signal=off means that the suppression of break signals is turned off ;-) :
<syntaxhighlight lang=bash>
XSCF> showpparmode -p 0
Host-ID :90071f40
Diagnostic Level :max
Message Level :max
Alive Check :on
Watchdog Reaction :reset
Break Signal :on
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR(Current) :off
PPAR DR(Next) :off
XSCF> setpparmode -p 0 -m break_signal=off
Diagnostic Level :max -> -
Message Level :max -> -
Alive Check :on -> -
Watchdog Reaction :reset -> -
Break Signal :on -> off
Autoboot(Guest Domain) :on -> -
Elastic Mode :off -> -
IOreconfigure :false -> -
CPU Mode :auto -> -
PPAR DR :off -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :max
Message Level :max
Alive Check :on (alive check:available)
Watchdog Reaction :reset (watchdog reaction:reset)
Break Signal :off (break signal:send)
Autoboot(Guest Domain) :on
Elastic Mode :off
IOreconfigure :false
CPU Mode :auto
PPAR DR :off
XSCF> sendbreak -y -p0
Send break signal to PPAR-ID 0?[y|n] :y
XSCF> console -y -p0
Console contents may be logged.
Connect to PPAR-ID 0?[y|n] :y
c)ontinue, s)ync, r)eset? c
^@Notifying cluster that this node is panicking
panic[cpu10]/thread=2a100df1c80: Aborting node because pm_tick delay of 13644 ms exceeds 5050 ms
...
</syntaxhighlight>
CIFS
[[category:KnowHow]]
== Decode version from vers=default out of mount options ==
Sometimes a mounted share just shows vers=default, like this:
<SyntaxHighLight lang=bash>
# grep cifs /proc/self/mounts
//cifs.server.de/cifsshare /media/cifs cifs rw,relatime,vers=default,cache=strict,username=s_ndr_mgt_saw,domain=AD,uid=0,noforceuid,gid=0,noforcegid,addr=172.16.42.12,file_mode=0644,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1 0 0
</SyntaxHighLight>
To see the actually negotiated version you need to take a look at <i>/proc/fs/cifs/DebugData</i>:
<SyntaxHighLight lang=bash highlight=11>
# cat /proc/fs/cifs/DebugData
Display Internal CIFS Data Structures for Debugging
---------------------------------------------------
CIFS Version 2.34
Features: DFS,FSCACHE,STATS2,DEBUG,ALLOW_INSECURE_LEGACY,WEAK_PW_HASH,CIFS_POSIX,UPCALL(SPNEGO),XATTR,ACL
CIFSMaxBufSize: 16384
Active VFS Requests: 0
Servers:
1) ConnectionId: 0x1 Hostname: cifs.server.de
Number of credits: 510 Dialect 0x302
TCP status: 1 Instance: 1
Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
In Send: 0 In MaxReq Wait: 0
Sessions:
1) Address: 172.16.42.11 Uses: 1 Capability: 0x300045 Session Status: 1
Security type: RawNTLMSSP SessionId: 0xf482c7c8
User: 0 Cred User: 0
Shares:
0) IPC: \\cifs.server.de\IPC$ Mounts: 1 DevInfo: 0x0 Attributes: 0x0
PathComponentMax: 0 Status: 1 type: 0 Serial Number: 0x0
Share Capabilities: None Share Flags: 0x0
tid: 0x93aa3485 Maximal Access: 0x1f00a9
1) \\cifs.server.de\cifsshare Mounts: 1 DevInfo: 0x20 Attributes: 0x1006f
PathComponentMax: 255 Status: 1 type: DISK Serial Number: 0xa0f7daf8
Share Capabilities: None Aligned, Partition Aligned, Share Flags: 0x0
tid: 0xe625f45d Optimal sector size: 0x200 Maximal Access: 0x1f01ff
MIDs:
--
</SyntaxHighLight>
In this example you can see the <i>Dialect</i> 0x302 in line 11 (highlighted), which means the mount is using SMB 3.02.
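The dialect value encodes the major version in the high byte and the minor revision in the low byte, so it can be decoded with shell arithmetic (a small illustration; the value 0x302 is taken from the output above, and the byte layout should be double-checked against the SMB2 dialect table for exotic values):
<syntaxhighlight lang=bash>
dialect=0x302
printf 'SMB %d.%02x\n' $(( dialect >> 8 )) $(( dialect & 0xff ))   # prints: SMB 3.02
</syntaxhighlight>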
Links:
* https://docs.kernel.org/admin-guide/cifs/usage.html
OpenSSL
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout
</syntaxhighlight>
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
=Print validity for certificate file=
<SyntaxHighlight lang=bash>
#!/bin/bash
for i in ${*}
do
certfile=${i}
enddate="$(openssl x509 -enddate -noout -in ${certfile} | sed -e 's#^.*=##g')"
declare -i valid_seconds=$(( $(date --date="${enddate}" '+%s') - $(date '+%s') ))
declare -i seconds=${valid_seconds}
declare -i days=$(( ${seconds} / ( 24 * 60 * 60 ) ))
seconds=$(( ${seconds} % ( 24 * 60 * 60 ) ))
declare -i hours=$(( ${seconds} / ( 60 * 60 ) ))
seconds=$(( ${seconds} % ( 60 * 60 ) ))
declare -i minutes=$(( ${seconds} / 60 ))
seconds=$(( ${seconds} % 60 ))
printf "%s: %s (%d days %d hours %d minutes %d seconds left)\n" "${certfile}" "$(date --date "${enddate}")" ${days} ${hours} ${minutes} ${seconds}
done
</SyntaxHighlight>
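For a simple pass/fail check, openssl can also do this directly: <i>-checkend N</i> exits 0 if the certificate is still valid N seconds from now. The sketch below generates a throwaway self-signed certificate just to demonstrate it (paths and subject are arbitrary):
<syntaxhighlight lang=bash>
# create a throwaway self-signed certificate valid for 30 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
    -out /tmp/demo-cert.pem -days 30 -subj '/CN=demo.example' 2>/dev/null
# exit status 0 while the certificate is still valid for the next 86400 seconds
openssl x509 -checkend 86400 -noout -in /tmp/demo-cert.pem && echo "still valid tomorrow"
</syntaxhighlight>
This is handy in monitoring scripts, since the exit status can be tested directly.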
SSH Tipps und Tricks
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, the way to the target=
==SSH over one or more hops==
To make an SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often tedious to chain the port forwardings or the SOCKS5 proxy through every hop. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
We can only reach host_b from jumphost_2, so we add an entry to ~/.ssh/config for it:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
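The same chain can also be given ad hoc on the command line with <i>-J</i> (OpenSSH 7.3 or newer), without any ~/.ssh/config entries:
<pre>
user@host_a$ ssh -J jumphost_1,jumphost_2 host_b
</pre>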
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: the environment you are in is so locked down with firewalls that you cannot work. But you have to SSH out to look something up elsewhere or to fetch something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server whose sshd listens on port 443, because most proxies only let you through on well-known ports.
Then you put this in your ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH target you want to reach. Of course you can in turn use this host as a ProxyCommand hop for further hosts, and so on.
==Ah yes... the internal wiki...==
Also not bad: if the internal wiki is only reachable from the internal network, we simply access it through a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then <i>ssh -N -f wiki</i>
=The fingerprint=
For verification it is often easier to work with shorter strings of digits. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_dsa.pub
1024 98:c5:76:...:08:fa:ba lollypop@lollybook (DSA)
</pre>
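ssh-keygen can fingerprint any public key file; here is a quick self-contained demonstration with a throwaway key (path and comment are arbitrary):
<syntaxhighlight lang=bash>
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -C demo@example >/dev/null
ssh-keygen -lf /tmp/demo_key.pub
# prints something like: 256 SHA256:<hash> demo@example (ED25519)
</syntaxhighlight>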
=Restricting users=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Starting pageant together with putty==
The file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini must contain the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
On PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> pem==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
=Probleme mit älteren Gegenstellen=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
As the Google Authenticator is a tool which is available on several SmartPhone OS I took this one for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns successful (code was correct) all authentication is done.
* new_authtok_reqd=done : New authentication token is required set to done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator failed no other authentications will be tried
* nullok : Allow user to access auth mechanism even if the password is empty
=== Add settings to the /etc/ssh/sshd_config ===
This lines have to be in the /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the setting in /etc/pam.d/sshd the "PasswordAuthentication no" will not be sufficient and still ask for a password because /etc/pam.d/sshd enables password authentication.
08c0e882cf2a0cf6b4da54d78819ea056bc61eb5
2711
2710
2023-03-16T09:34:39Z
Lollypop
2
/* Breakout from paradise */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH: the way to the target=
==SSH over one or more hops==
To make an SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often cumbersome to chain port forwardings or a SOCKS5 proxy through every hop. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
We can only reach host_b from jumphost_2, so we add an entry for it in ~/.ssh/config:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But jumphost_2 can only be reached via jumphost_1, so we need an entry for that as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you are tunneled through the two gateways jumphost_1 and jumphost_2.
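The two entries above can also be collapsed into one, since <i>ProxyJump</i> accepts a comma-separated list of hops (this is equivalent to the chained entries, not an addition to them):
<pre>
Host host_b
    ProxyJump jumphost_1,jumphost_2
</pre>
Or as a one-off on the command line: <i>ssh -J jumphost_1,jumphost_2 host_b</i>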
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding goes directly from host_a to host_b. Very lean and elegant.
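The example above uses the Solaris <i>share</i> command; on Linux a rough equivalent might look like the following sketch (assuming an NFSv4-capable nfs-kernel-server; the export path and options are illustrative):
<pre>
root@host_a# exportfs -o ro,insecure 127.0.0.1:/tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
root@host_b# mount -t nfs -o ro,vers=4,port=22049 127.0.0.1:/tmp /mnt
</pre>
The <i>insecure</i> export option is needed because the forwarded connection reaches the NFS server from a non-privileged source port.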
==Breakout from paradise==
Problem: the environment you are in is so locked down by firewalls that you cannot get anything done. But you need to SSH out to look something up or to fetch something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server whose sshd is listening on port 443, because most proxies only let you through on well-known ports.
Then add to ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, with <i>ssh ssh-via-proxy</i> you are on the SSH target where you wanted to go. Of course you can chain further hosts behind this connection via <i>ProxyJump ssh-via-proxy</i>, and so on.
==Ah yes... the internal wiki...==
Also not bad: if the wiki is only reachable from the internal network, we simply fetch it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic SOCKS5 proxy
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then simply <i>ssh -N -f wiki</i>.
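To check that the SOCKS proxy actually works before pointing a browser at it, curl can be used (assuming curl is installed; the wiki URL is the internal one from above):
<pre>
user@host_a$ ssh -N -f wiki
user@host_a$ curl --silent --socks5-hostname localhost:8888 https://wiki.intern.firma.de/ >/dev/null && echo proxy ok
</pre>
--socks5-hostname makes curl resolve the hostname on the far side of the tunnel, which matters for internal-only DNS names.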
=Fingerprint of a key=
For verification it is often easier to work with shorter strings of characters, so the fingerprint is handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
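Older documentation and devices often show the colon-separated MD5 fingerprint instead of the SHA256 one; ssh-keygen can print both. A quick sketch with a throwaway key (the /tmp path is just for illustration):
<syntaxhighlight lang=bash>
# generate a disposable key, then print its fingerprint in both formats
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
ssh-keygen -lf /tmp/demo_key.pub           # default: SHA256:...
ssh-keygen -E md5 -lf /tmp/demo_key.pub    # legacy: MD5:xx:xx:...
</syntaxhighlight>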
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
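Whether such directives are actually in effect can be checked with sshd's extended test mode, which prints the effective configuration (run as root on the server):
<syntaxhighlight lang=bash>
# sshd -T | grep -Ei 'allowgroups|denyusers'
</syntaxhighlight>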
=PuTTY Portable=
==Starting Pageant together with PuTTY==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following under [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see also:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH public key==
Convert a public key in SSH2/RFC4716 format (as exported by PuTTYgen) into a one-line OpenSSH public key:
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
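If the file is a full RFC4716 export (as produced by PuTTYgen's "SSH2 public key" option, or by ssh-keygen -e), ssh-keygen can do the same conversion without awk. A round-trip sketch with a freshly generated key (paths are illustrative):
<syntaxhighlight lang=bash>
rm -f /tmp/k /tmp/k.pub /tmp/k.rfc4716
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/k
ssh-keygen -e -f /tmp/k.pub > /tmp/k.rfc4716   # "---- BEGIN SSH2 PUBLIC KEY ----" format
ssh-keygen -i -f /tmp/k.rfc4716                # back to a one-line ssh-rsa key
</syntaxhighlight>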
=Problems with older peers=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
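Instead of typing these options every time, they can be made persistent for a single legacy host in ~/.ssh/config (the host name is a placeholder):
<pre>
Host legacy-box
    HostKeyAlgorithms +ssh-dss
    KexAlgorithms +diffie-hellman-group1-sha1
</pre>
Very old servers may additionally require <i>PubkeyAcceptedAlgorithms +ssh-dss</i> on recent OpenSSH clients.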
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put the authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
and create the SFTP users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
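Since the Match block above keys on the group <i>sftp</i>, the new user also has to be a member of that group (the group name is the one from the sshd_config above; these commands are a sketch):
<syntaxhighlight lang=bash>
# usermod --append --groups sftp ${USER}
# passwd ${USER}
</syntaxhighlight>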
= Two-factor authentication =
== Google Authenticator ==
As Google Authenticator is available on several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
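After installing the PAM module, each user still has to create an OTP secret by running the enrollment tool once; it writes ~/.google_authenticator and prints the secret and QR code for the phone app. One common non-interactive invocation (the flags are one possible choice):
<syntaxhighlight lang=bash>
$ google-authenticator -t -d -f -r 3 -R 30 -w 3
</syntaxhighlight>
The flags shown are -t time-based codes, -d disallow token reuse, -f write the file without confirmation, -r/-R rate limiting (3 logins per 30 seconds), -w window size; plain <i>google-authenticator</i> asks the same questions interactively.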
=== Add settings to /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) for details.
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : If a new authentication token is required, treat it as done. According to pam.d(5), done is like ok, except that the stack also terminates and control is immediately returned to the application.
* default=die : If pam_google_authenticator fails, no other authentication will be tried.
* nullok : Do not fail for users who have not set up an OTP secret yet.
=== Add settings to /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the change in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: the keyboard-interactive method will still ask for a password, because the default /etc/pam.d/sshd enables password authentication.
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, the way to the target=
==SSH over one or more hops==
To make an SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often quite difficult to chain the port forwardings or the SOCKS5 proxy through every session. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
host_b is only reachable from jumphost_2, so we make an entry for it in ~/.ssh/config:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
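The same path can also be used ad hoc without any config entries: the -J option takes a comma-separated list of jump hosts that are traversed in order, and the ProxyJump directive accepts such a list too:
<pre>
user@host_a$ ssh -J jumphost_1,jumphost_2 host_b
</pre>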
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
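The example above uses Solaris commands (share, URL-style NFS mounts). On a Linux pair the same idea might look roughly like this (a sketch; the exact exportfs and mount options depend on your NFS setup):
<pre>
root@host_a# exportfs -o ro 127.0.0.1:/tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
root@host_b# mount -t nfs -o ro,vers=4,port=22049 127.0.0.1:/tmp /mnt
</pre>
With NFSv4 only port 2049 is needed, so a single forwarding suffices; NFSv3 would additionally need the mountd port.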
==Breakout from paradise==
Problem: The environment you are in is so locked down with firewalls that you cannot work. But you need to SSH out to look something up or to fetch something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server whose sshd listens on port 443, because most proxies only let you through on well-known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH target where you wanted to go. Of course you can chain another host behind this connection via <i>ProxyJump ssh-via-proxy</i>, and so on.
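If connect is not available, a similar ProxyCommand can often be built with the OpenBSD netcat — a sketch, assuming your nc supports the -X/-x proxy options:
<pre>
Host ssh-via-proxy-nc
ProxyCommand nc -X connect -x proxy-server:3128 ssh-server 443
</pre>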
==Ah yes... the internal wiki...==
Also handy: if the wiki is only reachable from the internal network, we simply request it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic (SOCKS5) application-level forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
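To check that the SOCKS tunnel works without starting a browser, you can for example point curl at it (same URL as in the config above):
<pre>
user@host_a$ curl --socks5-hostname localhost:8888 https://wiki.intern.firma.de/
</pre>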
==Do this but only if...==
If you want, for example, to use <i>ProxyJump</i> only while connected remotely via OpenVPN, but not while you are in the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here:<br>
You will be proxied over jumphost.office only if both of the following hold:<br>
- A route on dev tun0 exists whose local source address matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
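Whether the Match block applies for a given destination can be checked with ssh -G, which prints the effective configuration without connecting (the host name is just an example):
<pre>
$ ssh -G some-host.office | grep -i proxyjump
</pre>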
=Fingerprint of a key=
For verification, shorter strings are easier to handle. The fingerprint is therefore handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
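The same works for a remote host key before the first login, e.g. by piping ssh-keyscan into ssh-keygen (reading from stdin via -f -; the host name is a placeholder):
<pre>
$ ssh-keyscan -t rsa some-host 2>/dev/null | ssh-keygen -lf -
</pre>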
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
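Applied to a minimal, hypothetical RFC4716-style public-key section (and using plain awk instead of nawk, as on Linux), the one-liner behaves like this:
<syntaxhighlight lang=bash>
# Create a tiny example public-key section (the key data is made up)
cat > /tmp/pubkey.ppk <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "lollypop@lollybook"
AAAAB3NzaC1yc2EAAAADAQAB
AAABAQDZluxxxx
---- END SSH2 PUBLIC KEY ----
EOF
# Same program as above: emit "ssh-rsa <joined key data> <comment>"
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' /tmp/pubkey.ppk
</syntaxhighlight>
This prints one line: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZluxxxx lollypop@lollybook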
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannot use ed25519 or other modern key types:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
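To make this permanent for a single legacy host, the option can go into ~/.ssh/config (the host name is a placeholder; HostKeyAlgorithms is often needed alongside):
<pre>
Host old-rsa-host
PubkeyAcceptedKeyTypes +ssh-rsa
HostKeyAlgorithms +ssh-rsa
</pre>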
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put the authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
and create the SFTP users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
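A quick check that the setup behaves as intended (user and host are placeholders):
<syntaxhighlight lang=bash>
$ sftp myuser@server    # should land in the chroot
$ ssh myuser@server     # should be refused: only the forced internal-sftp is allowed
</syntaxhighlight>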
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available for several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
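Then run google-authenticator once as the user who will log in; it creates ~/.google_authenticator and shows the secret/QR code for the smartphone app. The flags below (time-based, rate-limited) are one common choice, not mandatory:
<syntaxhighlight lang=bash>
$ google-authenticator -t -d -f -r 3 -R 30 -w 3
</syntaxhighlight>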
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) for details.
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : "New authentication token required" is also treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator fails, no other authentication methods will be tried
* nullok : Allow login for users who have not yet set up an OTP secret (no ~/.google_authenticator file)
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be present in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the line in /etc/pam.d/sshd, setting "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because the default /etc/pam.d/sshd enables password authentication.
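Finally validate the configuration and restart sshd, keeping an existing session open in case something went wrong (the service name may be ssh or sshd depending on the distribution):
<syntaxhighlight lang=bash>
$ sudo /usr/sbin/sshd -t && sudo systemctl restart ssh
</syntaxhighlight>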
b8b162697a888906ad5db80c2b8fab42f9d8373c
Rsyslog
0
406
2720
2023-03-23T13:18:43Z
Lollypop
2
Created page with "[[Category:Syslog]]"
wikitext
text/x-wiki
[[Category:Syslog]]
91bdc6da45e14cb1d612a415d8fdf7a676b5b758
2722
2720
2023-03-23T13:47:21Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Syslog]]
==Logging via TLS==
===Server===
/etc/rsyslog.d/syslog-server.conf
<SyntaxHighlight>
#
## Set the certificates to use
#
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
DefaultNetstreamDriverCertFile="/etc/rsyslog.d/syslog.server.de-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/syslog.server.de-key.pem"
)
#
## load input module TCP and force TLS
#
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
#
## Dynamic file template for logging into <host>/<facility>.log
#
template (name="DynFile" type="string" string="/var/log/remote/%FROMHOST%/%SYSLOGFACILITY-TEXT%.log")
#
## Ruleset to log with the dynamic file name "DynFile" from above
#
ruleset(name="fromremote") {
action(type="omfile" dynafile="DynFile")
stop
}
#
## start up TCP listener at port 6514 and bind ruleset "fromremote" from above
#
input(
type="imtcp"
port="6514"
ruleset="fromremote"
)
</SyntaxHighlight>
===Client===
/etc/rsyslog.d/syslog-client.conf
<SyntaxHighlight>
#
## Set CA certificate to use
#
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
)
#
## Set up the action for logging to remote syslog server with TLS
#
ruleset(name="remotesyslog") {
action(
name="syslogserver"
type="omfwd"
protocol="tcp"
target="syslog.server.de"
port="6514"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="anon"
)
}
</SyntaxHighlight>
/etc/rsyslog.d/firewall.frule
<SyntaxHighlight>
#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
($msg contains 'IN=' and $msg contains 'OUT=') \
then {
-/var/log/firewall
call remotesyslog
stop
}
</SyntaxHighlight>
/etc/rsyslog.d/auth.frule
<SyntaxHighlight>
if ( $syslogtag == 'login:' ) or \
( ( $programname == 'sshd' ) and \
( \
( $msg contains 'Accepted publickey for' ) or \
( $msg contains 'Received disconnect' ) or \
( $msg contains 'Disconnected from user' ) \
) \
) \
then {
-/var/log/auth.log
call remotesyslog
stop
}
</SyntaxHighlight>
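On both sides the configuration can be checked and exercised like this (rsyslogd -N1 only validates the config, logger generates a test message):
<syntaxhighlight lang=bash>
# rsyslogd -N1
# systemctl restart rsyslog
# logger -p auth.info "TLS syslog test"
</syntaxhighlight>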
428511df5cbc88c880a463e32767720358d17e11
2723
2722
2023-03-23T13:55:13Z
Lollypop
2
/* Client */
wikitext
text/x-wiki
[[Category:Syslog]]
==Logging via TLS==
===Server===
/etc/rsyslog.d/syslog-server.conf
<SyntaxHighlight>
#
## Set the certificates to use
#
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
DefaultNetstreamDriverCertFile="/etc/rsyslog.d/syslog.server.de-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/syslog.server.de-key.pem"
)
#
## load input module TCP and force TLS
#
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
#
## Dynamic file template for logging into <host>/<facility>.log
#
template (name="DynFile" type="string" string="/var/log/remote/%FROMHOST%/%SYSLOGFACILITY-TEXT%.log")
#
## Ruleset to log with the dynamic file name "DynFile" from above
#
ruleset(name="fromremote") {
action(type="omfile" dynafile="DynFile")
stop
}
#
## start up TCP listener at port 6514 and bind ruleset "fromremote" from above
#
input(
type="imtcp"
port="6514"
ruleset="fromremote"
)
</SyntaxHighlight>
===Client===
/etc/rsyslog.d/syslog-client.conf
<SyntaxHighlight>
#
## Set CA certificate to use
#
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
)
#
## Set up the action for logging to remote syslog server with TLS
#
ruleset(name="remotesyslog") {
action(
name="syslogserver"
type="omfwd"
protocol="tcp"
target="syslog.server.de"
port="6514"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="syslog.server.de"
gnutlsPriorityString="
Protocol=TLSv1.2
Curves=P-384
ClientSignatureAlgorithms=RSA+SHA384:ECDSA+SHA384
SignatureAlgorithms=RSA+SHA384:ECDSA+SHA384
CipherString=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384
"
)
}
</SyntaxHighlight>
/etc/rsyslog.d/firewall.frule
<SyntaxHighlight>
#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
($msg contains 'IN=' and $msg contains 'OUT=') \
then {
-/var/log/firewall
call remotesyslog
stop
}
</SyntaxHighlight>
/etc/rsyslog.d/auth.frule
<SyntaxHighlight>
if ( $syslogtag == 'login:' ) or \
( ( $programname == 'sshd' ) and \
( \
( $msg contains 'Accepted publickey for' ) or \
( $msg contains 'Received disconnect' ) or \
( $msg contains 'Disconnected from user' ) \
) \
) \
then {
-/var/log/auth.log
call remotesyslog
stop
}
</SyntaxHighlight>
bdce7e3f1b6fe18933759c3451f2ead79a4363bd
2724
2723
2023-03-23T13:57:11Z
Lollypop
2
/* Server */
wikitext
text/x-wiki
[[Category:Syslog]]
==Logging via TLS==
===Server===
/etc/rsyslog.d/syslog-server.conf
<SyntaxHighlight>
#
## Set the certificates to use
#
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
DefaultNetstreamDriverCertFile="/etc/rsyslog.d/syslog.server.de-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/syslog.server.de-key.pem"
)
#
## load input module TCP and force TLS
#
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
#
## Dynamic file template for logging into <host>/<facility>.log
#
template (name="DynFile" type="string" string="/var/log/remote/%FROMHOST%/%SYSLOGFACILITY-TEXT%.log")
#
## Ruleset to log with the dynamic file name "DynFile" from above
#
ruleset(name="fromremote") {
action(type="omfile" dynafile="DynFile")
stop
}
#
## start up TCP listener at port 6514 and bind ruleset "fromremote" from above
#
input(
type="imtcp"
port="6514"
ruleset="fromremote"
gnutlsPriorityString="
MinProtocol=TLSv1.2
MaxProtocol=TLSv1.3
CipherString=ECDHE-RSA-AES128-GCM-SHA256
Ciphersuites=TLS_AES_128_GCM_SHA256
SignatureAlgorithms=ECDSA+SHA512:RSA-PSS+SHA512
ClientSignatureAlgorithms=ECDSA+SHA512:RSA-PSS+SHA512
Groups=P-521
RecordPadding=512
Options=ServerPreference,Compression,DHSingle,ECDHSingle,AntiReplay,-AllowNoDHEKEX,EncryptThenMac,-UnsafeLegacyRenegotiation,NoRenegotiation,-MiddleboxCompat
"
)
</SyntaxHighlight>
===Client===
/etc/rsyslog.d/syslog-client.conf
<SyntaxHighlight>
#
## Set CA certificate to use
#
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
)
#
## Set up the action for logging to remote syslog server with TLS
#
ruleset(name="remotesyslog") {
action(
name="syslogserver"
type="omfwd"
protocol="tcp"
target="syslog.server.de"
port="6514"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="syslog.server.de"
gnutlsPriorityString="
Protocol=TLSv1.2
Curves=P-384
ClientSignatureAlgorithms=RSA+SHA384:ECDSA+SHA384
SignatureAlgorithms=RSA+SHA384:ECDSA+SHA384
CipherString=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384
"
)
}
</SyntaxHighlight>
/etc/rsyslog.d/firewall.frule
<SyntaxHighlight>
#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
($msg contains 'IN=' and $msg contains 'OUT=') \
then {
-/var/log/firewall
call remotesyslog
stop
}
</SyntaxHighlight>
/etc/rsyslog.d/auth.frule
<SyntaxHighlight>
if ( $syslogtag == 'login:' ) or \
( ( $programname == 'sshd' ) and \
( \
( $msg contains 'Accepted publickey for' ) or \
( $msg contains 'Received disconnect' ) or \
( $msg contains 'Disconnected from user' ) \
) \
) \
then {
-/var/log/auth.log
call remotesyslog
stop
}
</SyntaxHighlight>
0955bc1802672487ee3bb6d98d5b82c6a04125f4
2725
2724
2023-03-23T14:03:08Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Syslog]]
==Logging via TLS==
===Server===
/etc/rsyslog.d/syslog-server.conf
<SyntaxHighlight>
#
## Set the certificates to use
#
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
DefaultNetstreamDriverCertFile="/etc/rsyslog.d/syslog.server.de-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/syslog.server.de-key.pem"
)
#
## load input module TCP and force TLS
#
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
#
## Dynamic file template for logging into <host>/<facility>.log
#
template (name="DynFile" type="string" string="/var/log/remote/%FROMHOST%/%SYSLOGFACILITY-TEXT%.log")
#
## Ruleset to log with the dynamic file name "DynFile" from above
#
ruleset(name="fromremote") {
action(type="omfile" dynafile="DynFile")
stop
}
#
## start up TCP listener at port 6514 and bind ruleset "fromremote" from above
#
input(
type="imtcp"
port="6514"
ruleset="fromremote"
gnutlsPriorityString="
#Protocol=TLSv1.2
MinProtocol=TLSv1.2
MaxProtocol=TLSv1.3
CipherString=ECDHE-RSA-AES128-GCM-SHA256
Ciphersuites=TLS_AES_128_GCM_SHA256
SignatureAlgorithms=ECDSA+SHA512:RSA-PSS+SHA512
ClientSignatureAlgorithms=ECDSA+SHA512:RSA-PSS+SHA512
Groups=P-521
RecordPadding=512
Options=ServerPreference,Compression,DHSingle,ECDHSingle,AntiReplay,-AllowNoDHEKEX,EncryptThenMac,-UnsafeLegacyRenegotiation,NoRenegotiation,-MiddleboxCompat
"
)
</SyntaxHighlight>
===Client===
/etc/rsyslog.d/syslog-client.conf
<SyntaxHighlight>
#
## Set CA certificate to use
#
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
)
#
## Set up the action for logging to remote syslog server with TLS
#
ruleset(name="remotesyslog") {
action(
name="syslogserver"
type="omfwd"
protocol="tcp"
target="syslog.server.de"
port="6514"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="syslog.server.de"
gnutlsPriorityString="
#Protocol=TLSv1.2
MinProtocol=TLSv1.2
MaxProtocol=TLSv1.3
SignatureAlgorithms=ECDSA+SHA512:RSA-PSS+SHA512
ClientSignatureAlgorithms=ECDSA+SHA512:RSA-PSS+SHA512
Groups=P-521
RecordPadding=512
Options=ServerPreference,Compression,DHSingle,ECDHSingle,AntiReplay,-AllowNoDHEKEX,EncryptThenMac,-UnsafeLegacyRenegotiation,NoRenegotiation,-MiddleboxCompat
"
)
}
</SyntaxHighlight>
/etc/rsyslog.d/firewall.frule
<SyntaxHighlight>
#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
($msg contains 'IN=' and $msg contains 'OUT=') \
then {
-/var/log/firewall
call remotesyslog
stop
}
</SyntaxHighlight>
/etc/rsyslog.d/auth.frule
<SyntaxHighlight>
if ( $syslogtag == 'login:' ) or \
( ( $programname == 'sshd' ) and \
( \
( $msg contains 'Accepted publickey for' ) or \
( $msg contains 'Received disconnect' ) or \
( $msg contains 'Disconnected from user' ) \
) \
) \
then {
-/var/log/auth.log
call remotesyslog
stop
}
</SyntaxHighlight>
07d66aa1dd9bb6dad9c02ea7d6aaf9459bb9572f
2726
2725
2023-03-23T14:26:08Z
Lollypop
2
/* Logging via TLS */
wikitext
text/x-wiki
[[Category:Syslog]]
==Logging via TLS==
===Server===
/etc/rsyslog.d/syslog-server.conf
<SyntaxHighlight>
#
## Set the certificates to use
#
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
DefaultNetstreamDriverCertFile="/etc/rsyslog.d/syslog.server.de-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/syslog.server.de-key.pem"
)
#
## load input module TCP and force TLS
#
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
#
## Dynamic file template for logging into <host>/<facility>.log
#
template (name="DynFile" type="string" string="/var/log/remote/%FROMHOST%/%SYSLOGFACILITY-TEXT%.log")
#
## Ruleset to log with the dynamic file name "DynFile" from above
#
ruleset(name="fromremote") {
action(type="omfile" dynafile="DynFile")
stop
}
#
## start up TCP listener at port 6514 and bind ruleset "fromremote" from above
#
input(
type="imtcp"
port="6514"
ruleset="fromremote"
gnutlsPriorityString="%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256"
)
</SyntaxHighlight>
===Client===
/etc/rsyslog.d/syslog-client.conf
<SyntaxHighlight>
#
## Set CA certificate to use
#
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
)
#
## Set up the action for logging to remote syslog server with TLS
#
ruleset(name="remotesyslog") {
action(
name="syslogserver"
type="omfwd"
protocol="tcp"
target="syslog.server.de"
port="6514"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="anon"
gnutlsPriorityString="%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256"
)
}
</SyntaxHighlight>
/etc/rsyslog.d/firewall.frule
<SyntaxHighlight>
#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
($msg contains 'IN=' and $msg contains 'OUT=') \
then {
-/var/log/firewall
call remotesyslog
stop
}
</SyntaxHighlight>
/etc/rsyslog.d/auth.frule
<SyntaxHighlight>
if ( $syslogtag == 'login:' ) or \
( ( $programname == 'sshd' ) and \
( \
( $msg contains 'Accepted publickey for' ) or \
( $msg contains 'Received disconnect' ) or \
( $msg contains 'Disconnected from user' ) \
) \
) \
then {
-/var/log/auth.log
call remotesyslog
stop
}
</SyntaxHighlight>
5717e5c313291f3638d985612bfb9b71dbfadbcb
2727
2726
2023-03-23T14:28:01Z
Lollypop
2
/* Server */
wikitext
text/x-wiki
[[Category:Syslog]]
==Logging via TLS==
===Server===
/etc/rsyslog.d/syslog-server.conf
<SyntaxHighlight>
#
## Set the certificates to use
#
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
DefaultNetstreamDriverCertFile="/etc/rsyslog.d/syslog.server.de-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/syslog.server.de-key.pem"
)
#
## load input module TCP and force TLS
#
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
#
## Dynamic file template for logging into <host>/<facility>.log
#
template (name="DynFile" type="string" string="/var/log/remote/%FROMHOST%/%SYSLOGFACILITY-TEXT%.log")
#
## Ruleset to log with the dynamic file name "DynFile" from above
#
ruleset(name="fromremote") {
action(type="omfile" dynafile="DynFile")
stop
}
#
## start up TCP listener at port 6514 and bind ruleset "fromremote" from above
#
input(
type="imtcp"
port="6514"
ruleset="fromremote"
gnutlsPriorityString="%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256"
)
</SyntaxHighlight>
===Client===
/etc/rsyslog.d/syslog-client.conf
<SyntaxHighlight>
#
## Set CA certificate to use
#
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/CA.pem"
)
#
## Set up the action for logging to remote syslog server with TLS
#
ruleset(name="remotesyslog") {
action(
name="syslogserver"
type="omfwd"
protocol="tcp"
target="syslog.server.de"
port="6514"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="anon"
gnutlsPriorityString="%SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256"
)
}
</SyntaxHighlight>
/etc/rsyslog.d/firewall.frule
<SyntaxHighlight>
#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
($msg contains 'IN=' and $msg contains 'OUT=') \
then {
-/var/log/firewall
call remotesyslog
stop
}
</SyntaxHighlight>
/etc/rsyslog.d/auth.frule
<SyntaxHighlight>
if ( $syslogtag == 'login:' ) or \
( ( $programname == 'sshd' ) and \
( \
( $msg contains 'Accepted publickey for' ) or \
( $msg contains 'Received disconnect' ) or \
( $msg contains 'Disconnected from user' ) \
) \
) \
then {
-/var/log/auth.log
call remotesyslog
stop
}
</SyntaxHighlight>
68f6ca61b1c06a14358cdda97b3d9b14f90b14c6
Category:Syslog
14
407
2721
2023-03-23T13:19:27Z
Lollypop
2
Created page with "[[Category:KnowHow]]"
wikitext
text/x-wiki
[[Category:KnowHow]]
a53883501ef62bde531096835b5015f2915a2297
Ansible tips and tricks
0
299
2728
2649
2023-05-04T10:48:16Z
Lollypop
2
/* Ansible commandline */
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
== Ansible commandline ==
=== Get settings for host ===
Gathering settings for host in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
Gathering groups for host in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
Get all installed kernel versions:
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
Get all installed releases:
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all'
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads some variables from the response file, stores each of them as a fact (prefixed with oracle_ if not already), and additionally collects them in the variable <i>oracle_environment</i>. The variable <i>oracle_environment</i> can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
== Gathering oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
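The gathered dictionary can then be handed to any later task via <i>environment:</i> — a minimal usage sketch (the sqlplus call is only an example):
<syntaxhighlight lang=yaml>
- name: Check sqlplus using the gathered Oracle environment
  shell: sqlplus -V
  environment: "{{ ora_env }}"
</syntaxhighlight>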
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
cfa002e70271a8d733c51e07bc017ddfab579107
2729
2728
2023-05-04T10:48:59Z
Lollypop
2
/* Get settings for host */
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
== Ansible commandline ==
=== Get settings for host ===
Gathering settings for host in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
Gathering groups for host in ${hostname}:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
Get all installed kernel versions:
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
Get all installed releases:
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads some variables from the response file, stores each of them as a fact (prefixed with oracle_ if not already), and additionally collects them in the variable <i>oracle_environment</i>. The variable <i>oracle_environment</i> can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower }}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
== Gathering oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
3e6da8bc98aeb56225bc1fdb6b33694f12d14ca7
2730
2729
2023-05-04T10:50:07Z
Lollypop
2
/* Ansible commandline */
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
= Ansible commandline =
== Get settings for host ==
=== Gathering settings for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
=== Gathering groups for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
=== Get all installed kernel versions: ===
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
=== Get all installed releases: ===
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
== Gathering facts from file ==
=== Variables from an Oracle response file ===
This snippet reads selected variables from the Oracle response file, registers each one as a fact (the name is lowercased and prefixed with oracle_ if it is not already), and also collects the pairs in the list <i>oracle_environment</i>, which can be passed to <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower }}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
== Gathering oracle environment ==
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
== NetApp Modules ==
=== NetApp role ===
==== Snapshot user ====
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
81e365ef87e0d31e24192532e583761bfd65298d
2731
2730
2023-05-04T10:51:14Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
= Ansible commandline =
== Get settings for host ==
=== Gathering settings for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
=== Gathering groups for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
=== Get all installed kernel versions: ===
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
=== Get all installed releases: ===
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
= Gathering facts from file =
== Variables from an Oracle response file ==
This snippet reads selected variables from the Oracle response file, registers each one as a fact (the name is lowercased and prefixed with oracle_ if it is not already), and also collects the pairs in the list <i>oracle_environment</i>, which can be passed to <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower }}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
= Gathering oracle environment =
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
= NetApp Modules =
== NetApp role ==
=== Snapshot user ===
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
91b901cbda44f6c74b0ed502ddfa3c645d6c0422
2732
2731
2023-05-04T10:52:02Z
Lollypop
2
/* Ansible commandline */
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
= Ansible commandline =
== Get settings for host ==
=== Gathering settings for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
=== Gathering groups for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Get information from hosts ==
=== Get all installed kernel versions: ===
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
=== Get all installed releases: ===
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
= Gathering facts from file =
== Variables from an Oracle response file ==
This snippet reads selected variables from the Oracle response file, registers each one as a fact (the name is lowercased and prefixed with oracle_ if it is not already), and also collects the pairs in the list <i>oracle_environment</i>, which can be passed to <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/template_{{ oracle_version }}/db_{{ oracle_version | lower }}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
= Gathering oracle environment =
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
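On newer ansible-core (2.11 and later), the same dictionary can probably be built without calling __setitem__, using the split filter together with dict(). This is an untested sketch, assuming every relevant line contains its value after the first =:
<syntaxhighlight lang=yaml>
- name: Creating environment ora_env (filter variant)
  set_fact:
    # Split each "KEY=VALUE" line into a pair and build a dict from the pairs
    ora_env: "{{ dict(env.stdout_lines | map('split', '=', 1)) }}"
</syntaxhighlight>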
= NetApp Modules =
== NetApp role ==
=== Snapshot user ===
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
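Such a role is typically used from Ansible with the netapp.ontap collection. Below is a minimal sketch; the hostname, volume, and the vaulted password variable are placeholders, and the snapshot name must match the ansible_* query of the role above:
<syntaxhighlight lang=yaml>
- name: Create a snapshot as the restricted snapshot user
  netapp.ontap.na_ontap_snapshot:
    state: present
    snapshot: ansible_nightly        # must match the "-snapshot ansible_*" query
    volume: vol_data
    vserver: cluster01
    hostname: cluster01.example.com
    username: ansible-snapuser
    password: "{{ vault_snapuser_password }}"
    https: true
    validate_certs: false
</syntaxhighlight>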
8541a9a21f22a9d536e42fbfa2110bbc4d1d4332
Ubuntu zsys
0
377
2733
2394
2023-05-09T11:28:14Z
Lollypop
2
/* Cconfigure garbage collection */
wikitext
text/x-wiki
[[category:Ubuntu]]
==Configure garbage collection==
<syntaxhighlight lang=bash>
cat > /etc/zsys.conf <<EOF
history:
# Keep at least n history entry per unit of time if enough of them are present
# The order condition the bucket start and end dates (from most recent to oldest)
# We also keep all previous state saves for the previous day.
# gcstartafter: 1 (GC start after a whole day).
gcstartafter: 1
# Minimum number of recent states to keep.
keeplast: 7
# - name: Arbitrary name of the bucket
# buckets: Number of buckets over the interval
# bucketlength: Length of each bucket in days
# samplesperbucket: Number of datasets to keep in each bucket
gcrules:
- name: PreviousDay
buckets: 1
bucketlength: 1
samplesperbucket: 3
#
# For the previous day (after one full day of retention of all
# snapshots due to gcstartafter: 1), the rule PreviousDay
# defines one bucket (buckets: 1) of size 1 day (bucketlength: 1),
# where we keep 3 states. So basically, we keep 3 states on the
# previous full day.
#
- name: PreviousWeek
buckets: 5
bucketlength: 1
samplesperbucket: 1
#
# For the 5 days before (buckets: 5 of size 1 day (bucketlength: 1)),
# we keep one state (samplesperbucket: 1).
# It means thus that we keep one state per day for each of those 5 days.
#
- name: PreviousMonth
buckets: 4
bucketlength: 7
samplesperbucket: 1
#
# We divide the previous month, in 4 buckets (buckets: 4) of
# 7 days each (bucketlength: 7) and keep one state for each
# (samplesperbucket: 1).
# In English, this means that we try to keep one state save
# per week over the previous month.
#
general:
# Minimal free space required before taking a snapshot
minfreepoolspace: 20
# Daemon timeout in seconds
timeout: 60
EOF
systemctl restart zsysd.service
zsysctl -vvv service gc
update-grub
</syntaxhighlight>
fef0a2e8b0aa6244808935c1a723dd6772e68469
SSH Tipps und Tricks
0
75
2734
2719
2023-05-16T13:14:23Z
Lollypop
2
/* Problems with older destinations */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
Suppose the SSH connection from host_a to host_b has to pass through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often difficult to chain the port forwardings or the SOCKS5 proxy through every session. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
host_b can only be reached from jumphost_2, so we make an entry in ~/.ssh/config for this:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is so locked down with firewalls that you cannot work. But you have to SSH out to look somewhere else or to get something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. under Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server whose sshd listens on port 443, because most proxies only let you through on well-known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH target you wanted to reach. Of course you can place another host behind this connection via <i>ProxyJump ssh-via-proxy</i>, and so on.
==Ah yes... the internal wiki...==
Also handy: if the wiki is only accessible from the internal network, we simply request it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic SOCKS5 proxy forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want, for example, <i>ProxyJump</i> only while you are connected remotely via OpenVPN, but not while you are at the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here is:<br>
You will be proxied over jumphost.office only if both of the following hold:<br>
- A route on a dev tun0 where the local IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
=Fingerprint of a key=
Verification is easier with shorter strings of characters, so the fingerprint is handy for comparing keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannot use ed25519 or other modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to newer OpenSSH daemons with an RSA key, you will find this in the log and cannot connect via ssh:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
A temporary workaround, which is not recommended, is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you should rather switch to ed25519 keys.
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
and create the SFTP users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
Since the Google Authenticator is available for several smartphone OSes, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : If a new authentication token is required, this is also treated as done. Per pam.d(5), done is like ok, except that the stack also terminates and control is immediately returned to the application.
* default=die : If pam_google_authenticator fails, no other authentication will be tried.
* nullok : Allow users who have not yet set up an OTP secret to authenticate without one.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the change in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd will still ask for a password, because /etc/pam.d/sshd enables password authentication.
d3b4ebc4a31c14b1db0e2f777f2bd06956f4f836
2753
2734
2023-07-20T12:36:24Z
Lollypop
2
/* Do this but only if... */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
Suppose the SSH connection from host_a to host_b has to pass through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often difficult to chain the port forwardings or the SOCKS5 proxy through every session. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
host_b can only be reached from jumphost_2, so we make an entry in ~/.ssh/config for this:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
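Alternatively, OpenSSH accepts a comma-separated list of jump hosts, traversed from left to right, so the whole chain can also be written in a single stanza (or given ad hoc with ssh -J):
<pre>
Host host_b
    ProxyJump jumphost_1,jumphost_2
</pre>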
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is so locked down with firewalls that you cannot work. But you have to SSH out to look somewhere else or to get something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. under Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server whose sshd listens on port 443, because most proxies only let you through on well-known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And just like that, <i>ssh ssh-via-proxy</i> puts you on the SSH target you wanted to reach. Of course you can place another host behind this connection via <i>ProxyJump ssh-via-proxy</i>, and so on.
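As a sketch (the inner host name is a placeholder), reaching a further host behind the proxied connection needs only one more stanza in ~/.ssh/config:
<pre>
Host behind-proxy-host
    ProxyJump ssh-via-proxy
</pre>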
==Ah yes... the internal wiki...==
Also handy: if the wiki is only accessible from the internal network, we simply request it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic SOCKS5 proxy forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want, for example, <i>ProxyJump</i> only while you are connected remotely via OpenVPN, but not while you are at the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here is:<br>
You will be proxied over jumphost.office only if both of the following hold:<br>
- A route on a dev tun0 where the local IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
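To check which options ssh would actually apply to a destination (and thus whether such a Match/Host rule fires), you can let ssh print its resolved client configuration with -G (OpenSSH 6.8 or newer). The config file and host name below are made up for illustration:
<syntaxhighlight lang=bash>
# Build a throwaway client config with a conditional ProxyJump
cat > /tmp/match_demo.conf <<'EOF'
Host *.office
    ProxyJump jumphost.office
EOF
# -G resolves and prints the effective options without connecting
ssh -G -F /tmp/match_demo.conf db1.office | grep -i proxyjump
</syntaxhighlight>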
==rsync from remote to remote==
Sometimes your local host is just needed as a relay station to sync between two servers which cannot see each other, while you can see both because you are in the admin network.
You need to get files from HostA to HostB, but your laptop does not have enough disk space to stage them from HostA on the local disk and then copy them to HostB.
This is a possible solution:
1. Make a reverse forwarding from localhost:PortX on HostA to HostB port 22 (so all packets you send to PortX on HostA come back to your laptop and are forwarded to port 22, the ssh port, on HostB)
2. Execute rsync on HostA and tell rsync to make an ssh connection to port PortX for the destination host (which is routed back to your laptop and from there to HostB port 22, see 1.)
Here is an example (with a random port between 50000-52999):
<SyntaxHighLight lang=bash>
$ PortX=$(( ${RANDOM} % 3000 + 50000 ))
$ HostA=10.1.0.42
$ HostB=10.2.0.43
$ ssh -AR 127.0.0.1:${PortX}:${HostB}:22 ${HostA} "rsync -e 'ssh -p ${PortX} -o StrictHostKeyChecking=no' -PWav <HostA-Path> 127.0.0.1:<HostB-Path>"
</SyntaxHighLight>
Some explanations:
$RANDOM is a bash builtin, so this works only inside bash.
Use PortX=<your chosen port number> in other shells.
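A portable way to still pick a random port in the same range is to read two random bytes from /dev/urandom with od(1); a sketch for any POSIX shell:
<syntaxhighlight lang=bash>
# Two random bytes -> an integer 0..65535, mapped into 50000..52999
PortX=$(( $(od -An -N2 -tu2 /dev/urandom) % 3000 + 50000 ))
echo ${PortX}
</syntaxhighlight>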
SSH Options:
<pre>
-A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can
access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the
keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).
-R [bind_address:]port:host:hostport
Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix
socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by
host:hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client.
Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty
bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the
server's GatewayPorts option is enabled (see sshd_config(5)).
If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O
forward the allocated port will be printed to the standard output.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
</pre>
=Fingerprint of a key=
For verification, it is often easier with shorter number strings. Therefore, the fingerprint is handy to compare keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannod use ed25519 or other more modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to new OpenSSH daemons with an RSA key, you will find this in your log and you can not connect via ssh:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
The temporary workaround which is not recommended is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you better should switch to ED25519 keys.
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
As the Google Authenticator is a tool which is available on several SmartPhone OS I took this one for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns successful (code was correct) all authentication is done.
* new_authtok_reqd=done : New authentication token is required set to done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator failed no other authentications will be tried
* nullok : Allow user to access auth mechanism even if the password is empty
=== Add settings to the /etc/ssh/sshd_config ===
This lines have to be in the /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the setting in /etc/pam.d/sshd the "PasswordAuthentication no" will not be sufficient and still ask for a password because /etc/pam.d/sshd enables password authentication.
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
To make an SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often tedious to chain the port forwardings or the SOCKS5 proxy through every session. It is easier to define <i>ProxyJump</i>s for the way from host_a to host_b.
We can only reach host_b from jumphost_2, so we make an entry in ~/.ssh/config for this:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
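The same multi-hop route can also be given ad hoc on the command line. Assuming the host names above, the -J option chains the jump hosts without touching ~/.ssh/config:
<syntaxhighlight lang=bash>
$ ssh -J jumphost_1,jumphost_2 host_b
</syntaxhighlight>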
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is so locked down with firewalls that you cannot get any work done. But you need to SSH out to look somewhere else or to fetch something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. under Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server with an sshd listening on port 443, because most proxies only let you through on well-known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, <i>ssh ssh-via-proxy</i> puts you on the SSH target where you want to go. Of course you can chain another host behind this connection via <i>ProxyJump ssh-via-proxy</i>, and so on.
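If connect is not installed, OpenBSD netcat can often fill in; this is a sketch assuming your nc build supports the -X/-x proxy options:
<pre>
Host ssh-via-proxy
    ProxyCommand nc -X connect -x proxy-server:3128 %h %p
    Hostname ssh-server
    Port 443
</pre>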
==Ah yes... the internal wiki...==
Also not bad: if the internal wiki is only accessible from the internal network, we simply access it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for the dynamic ("SOCKS5 proxy") forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want, for example, <i>ProxyJump</i> only when you are connected remotely via OpenVPN, but not when you are at the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here is:<br>
You will be proxied over jumphost.office only if both of the following hold:<br>
- There is a route on device tun0 whose local IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
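To see which configuration ssh would actually apply for a destination, without connecting, you can use -G (the host name here is just an example):
<syntaxhighlight lang=bash>
$ ssh -G somehost.office | grep -i proxyjump
</syntaxhighlight>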
=rsync from remote to remote=
Sometimes your local host is just needed as a relay station to sync between two servers which cannot see each other. You can see both because you are in the admin network.
But you need to get files from HostA to HostB, and your laptop does not have enough disk space to copy them from HostA to local disk first and then onto HostB.
This is a possible solution:
1. Set up a reverse forwarding from localhost:PortX on HostA to HostB port 22 (so all packets you send to PortX on HostA come back to your laptop and are forwarded to port 22, the SSH port, on HostB).
2. Execute rsync on HostA and tell rsync to make an SSH connection to port PortX for the destination host (which is sent back to your laptop and from there to HostB port 22, see 1.).
Here is an example (with a random port between 50000 and 52999):
<SyntaxHighLight lang=bash>
$ PortX=$(( ${RANDOM} % 3000 + 50000 ))
$ HostA=10.1.0.42
$ HostB=10.2.0.43
$ ssh -AR 127.0.0.1:${PortX}:${HostB}:22 ${HostA} "rsync -e 'ssh -p ${PortX} -o StrictHostKeyChecking=no' -PWav <HostA-Path> 127.0.0.1:<HostB-Path>"
</SyntaxHighLight>
Some explanations:
$RANDOM is a bash builtin, so this works only inside bash.
Use PortX=<your chosen port number> in other shells.
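A rough equivalent for non-bash shells reads two random bytes from /dev/urandom; od is POSIX, so this sketch should work in plain sh:
<syntaxhighlight lang=bash>
# Pick a pseudo-random port between 50000 and 52999 without bash's $RANDOM
PortX=$(( $(od -An -N2 -tu2 /dev/urandom) % 3000 + 50000 ))
echo "${PortX}"
</syntaxhighlight>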
SSH Options:
<pre>
-A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can
access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the
keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).
-R [bind_address:]port:host:hostport
Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix
socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by
host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client.
Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty
bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the
server's GatewayPorts option is enabled (see sshd_config(5)).
If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O
forward the allocated port will be printed to the standard output.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
</pre>
=Fingerprint of a key=
For verification, shorter strings are easier to handle. The fingerprint therefore comes in handy to compare keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
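To check a host you have not connected to yet, you can pipe a scanned host key into ssh-keygen; newer OpenSSH versions accept - for stdin (the host name is a placeholder):
<syntaxhighlight lang=bash>
$ ssh-keyscan -t ed25519 remote-host 2>/dev/null | ssh-keygen -lf -
</syntaxhighlight>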
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
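If putty-tools is installed, puttygen can presumably do the conversion as well; a sketch, assuming a full .ppk key file (the file names are placeholders) and that your version supports the public-openssh output type:
<syntaxhighlight lang=bash>
$ puttygen mykey.ppk -O public-openssh -o mykey.pub
</syntaxhighlight>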
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannot use ed25519 or other more modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
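To make this permanent for a single legacy host, the same option fits into ~/.ssh/config:
<pre>
Host old-rsa-host
    PubkeyAcceptedKeyTypes +ssh-rsa
</pre>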
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to newer OpenSSH daemons with an RSA key, you will find this in your log and you cannot connect via ssh:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
A temporary workaround, which is not recommended, is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you should rather switch to ED25519 keys.
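Generating a replacement ED25519 key and distributing it could look like this (file name, user, and host are placeholders):
<syntaxhighlight lang=bash>
$ ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519
$ ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host
</syntaxhighlight>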
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i> and create the SFTP users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
Since Google Authenticator is available for several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
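After installing the package, each user runs the enrollment tool once on the destination host to create ~/.google_authenticator with the secret and emergency codes. The flags below are one common choice (time-based codes, disallow code reuse, rate limiting); check google-authenticator --help for your version:
<syntaxhighlight lang=bash>
$ google-authenticator -t -d -f -r 3 -R 30 -w 3
</syntaxhighlight>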
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : If a new authentication token is required, treat it as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator failed, no other authentication will be tried.
* nullok : Allow users who have not (yet) set up an OTP secret to pass this module without one.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the line in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd will still ask for a password, because /etc/pam.d/sshd enables password authentication.
= Workarounds for CVEs =
In this section I just say: This <b>might</b> help! Absolutely no warranty for anything!<br>
If my workaround does not fix the problem, look one line above this one.
== CVE-2023-48795 alias Terrapin ==
First read at [https://terrapin-attack.com/ terrapin-attack.com]
===Check on a Debian/Ubuntu===
<SyntaxHighlight lang=bash>
$ sudo apt-get changelog openssh-server | grep -i CVE-2023-48795
- debian/patches/CVE-2023-48795.patch: implement "strict key exchange"
- CVE-2023-48795
</SyntaxHighlight>
This means patches against this CVE are included.
===Check whether ssh and sshd offer problematic ciphers and MACs===
sshd:
<SyntaxHighlight lang=bash>
$ sudo sshd -T | grep -iE '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
macs umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
ssh: (Example for localhost, other hosts might get other values depending on your ~/.ssh/config or elsewhere)
<SyntaxHighlight lang=bash>
$ ssh -G localhost | grep -E '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
macs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
===Include workarounds===
The <i>Include</i> statement enters config in OpenSSH 7.3p1 as far as i know.
So, if you have an <i>Include</i> statement in<br>
/etc/ssh/sshd_config:
<pre>
Include /etc/ssh/sshd_config.d/*.conf
</pre>
/etc/ssh/ssh_config:
<pre>
Include /etc/ssh/ssh_config.d/*.conf
</pre>
then add the following files:<br>
/etc/ssh/sshd_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
/etc/ssh/ssh_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
After that do the checks from above. If they look good, restart sshd. If not... toss my trash.
2667ff641635cd53561aeb2de7c1d8c7d246dda6
2769
2768
2023-12-21T17:53:03Z
Lollypop
2
/* Check ssh and sshd if they offer problematic ciphers ans macs */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
To make an SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often tedious to loop the port forwardings or the SOCKS5 proxy through every session. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
We can only reach host_b from jumphost_2, so we make an entry for it in ~/.ssh/config:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
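The whole chain can also be written in one place: <i>ProxyJump</i> accepts a comma-separated list of jump hosts, and on the command line the equivalent flag is -J. A sketch with the same host names as above:
<pre>
Host host_b
ProxyJump jumphost_1,jumphost_2
</pre>
or ad hoc, without touching ~/.ssh/config:
<pre>
$ ssh -J jumphost_1,jumphost_2 host_b
</pre>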
==Port forwardings, for example for NFS, are now as easy as this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is so walled in by firewalls that you cannot work. But you have to SSH out to look something up or to fetch something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. on Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server with an sshd listening on port 443, because most proxies only let you through on well-known ports.
Then you enter this in ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Whoosh, with <i>ssh ssh-via-proxy</i> you are on the SSH target where you want to go. Of course you can chain another host behind this connection via <i>ProxyJump ssh-via-proxy</i>, and so on.
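If you do not want to install connect, the same can be done with a netcat that has proxy support (the -X/-x options of OpenBSD netcat; a sketch, assuming such an nc is installed):
<pre>
Host ssh-via-proxy
ProxyCommand nc -X connect -x proxy-server:3128 %h %p
Hostname ssh-server
Port 443
</pre>
Here %h and %p are expanded by ssh to the target hostname and port (ssh-server and 443).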
==Ah yes... the internal wiki...==
Also not bad: if the wiki is only reachable from the internal network, we simply request it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression (optional)
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Opens a local port acting as a SOCKS4/5 proxy; connections are forwarded through the remote host.
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want, for example, a <i>ProxyJump</i> only when you are connected remotely via OpenVPN, but not when you are at the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here is:<br>
You will be proxied over jumphost.office if both of the following hold:<br>
- There is a route on dev tun0 whose local IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
=rsync from remote to remote=
Sometimes your local host is just needed as a relay station to sync between two servers which cannot see each other; you can see both because you are in the admin network.
You need to get files from HostA to HostB, but your laptop does not have enough disk space to copy them from HostA to the local disk first and then onto HostB.
This is a possible solution:
1. Make a reverse forwarding from localhost:PortX on HostA to HostB port 22 (so all packets you send to PortX on HostA come back to your laptop and are forwarded to port 22, the SSH port, on HostB)
2. Execute rsync on HostA and tell rsync to make an SSH connection to port PortX for the destination host (which is routed back to your laptop and from there to HostB port 22, see 1.)
Here is an example (with a random port between 50000 and 52999):
<SyntaxHighLight lang=bash>
$ PortX=$(( ${RANDOM} % 3000 + 50000 ))
$ HostA=10.1.0.42
$ HostB=10.2.0.43
$ ssh -AR 127.0.0.1:${PortX}:${HostB}:22 ${HostA} "rsync -e 'ssh -p ${PortX} -o StrictHostKeyChecking=no' -PWav <HostA-Path> 127.0.0.1:<HostB-Path>"
</SyntaxHighLight>
Some explanations:
$RANDOM is a bash builtin, so this only works in bash.
Use PortX=<your chosen port number> in other shells.
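A quick sanity check of the port arithmetic in bash: the modulo keeps the offset in 0–2999, so the result always stays within 50000–52999.
<SyntaxHighlight lang=bash>
# $RANDOM yields 0..32767; % 3000 maps that to 0..2999
PortX=$(( RANDOM % 3000 + 50000 ))
echo "${PortX}"
# assert the range
[ "${PortX}" -ge 50000 ] && [ "${PortX}" -le 52999 ] && echo "in range"
</SyntaxHighlight>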
SSH Options:
<pre>
-A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can
access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the
keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).
-R [bind_address:]port:host:hostport
Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix
socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by
host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destina‐
tions requested by the remote SOCKS client.
Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty
bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the
server's GatewayPorts option is enabled (see sshd_config(5)).
If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O
forward the allocated port will be printed to the standard output.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
</pre>
=Fingerprint of a key=
For verification, shorter strings are easier to handle. The fingerprint is therefore handy to compare keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
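Older tools show MD5 fingerprints; ssh-keygen can select the hash with -E so you can compare against those too. A sketch using a throwaway key generated only for demonstration:
<SyntaxHighlight lang=bash>
# generate a throwaway key just to have something to fingerprint
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -q -f "${tmpdir}/demo_key"
ssh-keygen -lf "${tmpdir}/demo_key.pub"           # SHA256 fingerprint (default)
ssh-keygen -l -E md5 -f "${tmpdir}/demo_key.pub"  # MD5 fingerprint, for old clients
rm -r "${tmpdir}"
</SyntaxHighlight>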
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
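To sanity-check the conversion without a real key, feed the same awk program a dummy file in the SSH2 public key layout (the key data below is fake, for illustration only; plain awk works as well as nawk here):
<SyntaxHighlight lang=bash>
# dummy file in PuTTY/SSH2 public key layout (fake key data)
cat > /tmp/demo_pubkey.ppk <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "testkey"
AAAAB3NzaC1yc2EAAAADAQAB
---- END SSH2 PUBLIC KEY ----
EOF
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' /tmp/demo_pubkey.ppk
# prints: ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB testkey
</SyntaxHighlight>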
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannot use ed25519 or other more modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to a new OpenSSH daemon with an RSA key, you will find this in your log and you cannot connect via ssh:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
The temporary workaround, which is not recommended, is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you should rather switch to ED25519 keys.
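Generating an ED25519 key pair is quick; a sketch (the path and comment are examples, and -N '' is used here only to keep it non-interactive):
<SyntaxHighlight lang=bash>
# generate a new ED25519 key pair; -a sets the KDF rounds protecting the private key
rm -f /tmp/id_ed25519_demo /tmp/id_ed25519_demo.pub
ssh-keygen -t ed25519 -a 100 -N '' -q -f /tmp/id_ed25519_demo -C "lollypop@lollybook"
cat /tmp/id_ed25519_demo.pub
</SyntaxHighlight>
For real use pick ~/.ssh/id_ed25519 and a passphrase (drop -N ''), then install the public key on the server, e.g. with ssh-copy-id.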
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put the authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>, and create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
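Note that sshd is strict about the chroot target: the ChrootDirectory and all its parent directories must be owned by root and must not be group- or world-writable, otherwise logins fail with "bad ownership or modes for chroot directory". A sketch of the checks:
<pre>
# chroot path must be root-owned and not writable by group/others
chown root:root /sftp_chroot
chmod 755 /sftp_chroot
# validate the config before restarting sshd
sshd -t
</pre>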
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available for several smartphone OSes, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
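Each user additionally has to enroll once on the destination host: the google-authenticator tool creates ~/.google_authenticator and prints the QR code and secret for the phone app. A common invocation (see its man page; adjust the flags to taste):
<pre>
$ google-authenticator -t -d -f -r 3 -R 30 -w 3
# -t time-based codes (TOTP), -d disallow reuse of a code,
# -f write the secret file without asking,
# -r 3 -R 30 rate-limit to 3 logins per 30 seconds, -w 3 window of 3 codes
</pre>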
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator succeeds (the code was correct), authentication is done.
* new_authtok_reqd=done : If a new authentication token is required, this is also treated as done. From the man page: done is like ok, except that the stack also terminates and control is immediately returned to the application.
* default=die : If pam_google_authenticator fails, no other authentication will be tried.
* nullok : Allow users who have not set up an OTP secret file yet to authenticate without a code.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the entry in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because /etc/pam.d/sshd enables password authentication.
= Workarounds for CVEs =
In this section I just say: This <b>might</b> help! Absolutely no warranty for anything!<br>
If my workaround does not fix the problem, look one line above this one.
== CVE-2023-48795 alias Terrapin ==
First read at [https://terrapin-attack.com/ terrapin-attack.com]
===Check if patches against this CVE are already included in the OS===
On Debian/Ubuntu:
<SyntaxHighlight lang=bash>
$ sudo apt-get changelog openssh-server | grep -i CVE-2023-48795
- debian/patches/CVE-2023-48795.patch: implement "strict key exchange"
- CVE-2023-48795
</SyntaxHighlight>
This means patches against this CVE are included.
===Check ssh and sshd if they offer problematic ciphers and macs===
sshd:
<SyntaxHighlight lang=bash>
$ sudo sshd -T | grep -iE '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
macs umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
ssh: (example for localhost; other hosts may get other values depending on your ~/.ssh/config or elsewhere)
<SyntaxHighlight lang=bash>
$ ssh -G localhost | grep -E '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
macs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
===Include workarounds===
The <i>Include</i> statement entered the config language in OpenSSH 7.3p1, as far as I know.
So, if you have an <i>Include</i> statement in<br>
/etc/ssh/sshd_config:
<pre>
Include /etc/ssh/sshd_config.d/*.conf
</pre>
/etc/ssh/ssh_config:
<pre>
Include /etc/ssh/ssh_config.d/*.conf
</pre>
then add the following files:<br>
/etc/ssh/sshd_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
/etc/ssh/ssh_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
After that do the checks from above. If they look good, restart sshd. If not... toss my trash.
d49087f89f72fd5d66ce2ea54dbc513a803654dd
2770
2769
2023-12-21T17:54:29Z
Lollypop
2
/* Check on an Debian/Ubuntu */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
To make the SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you always log in first and then continue logging in, it is sometimes very difficult to loop through the port forwardings or the Socks5 proxy. It is easier to define <i>ProxyJump</i>s for the way from host_a to host_b.
So we only get from jumphost_2 to host_b, so we make an entry in ~/.ssh/config for this:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
==Portforwardings for example for NFS are now easy like this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is unfortunately so unfortunate with firewalls that you can not work. But you have to SSH out to look somewhere else or to get something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. under Ubuntu: apt-get install connect-proxy.
Furthermore you need a SSH server, where a sshd is listening on port 443, because most proxies only want to let you through on known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Whoop di whoop is one with <i>ssh ssh-server</i> on the SSH target, where one would like to go. Of course you can enter another host behind this connection via <i>ProxyJump ssh-via-proxy</i> etc. etc.
==Ah yes... the internal wiki...==
Also not bad, if this is only accessible from the internal network, then we just request via socks proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression <- das ist optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want for example <i>ProxyJump</i> only if you are connected remote via OpenVPN but not if you are at the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here is:<br>
You will be proxied over jumphost.office if there is both of<br>
- A route on a dev tun0 where the local IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
=rsync from remote to remote=
Sometimes your local host is just needed as a relay station to sync between two servers which cannot see each other. You can see both because you are in the admin network.
But you need to get files from HostA to HostB and your laptop has not enough diskspace to save it from HostA to the local disk and then copy it on HostB.
This is a possible solution:
1. Make a reverse forwarding from localhost:PortX on HostA to HostB port 22 (So all packets you send to PortX on HostA get back to your laptop and will be send to port 22, the ssh port, on HostB)
2. Execute rsync on HostA ans tell rsync to make a ssh connection to port PortX for the destination host (which is send back to your laptop and from here to HostB port 22, see 1.)
Here is an Example (with a random port between 50000-52999):
<SyntaxHighLight lang=bash>
$ PortX=$(( ${RANDOM} % 3000 + 50000 ))
$ HostA=10.1.0.42
$ HostB=10.2.0.43
$ ssh -AR 127.0.0.1:${PortX}:${HostB}:22 ${HostA} "rsync -e 'ssh -p ${PortX} -o StrictHostKeyChecking=no' -PWav <HostA-Path> 127.0.0.1:<HostB-Path>"
</SyntaxHighLight>
Some explanations:
$RANDOM is a bash builtin so this works only inside bash.
Use Portx=<your port choosen number> in other shells.
SSH Options:
<pre>
-A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can
access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the
keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).
-R [bind_address:]port:host:hostport
Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix
socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by
host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destina‐
tions requested by the remote SOCKS client.
Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty
bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the
server's GatewayPorts option is enabled (see sshd_config(5)).
If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O
forward the allocated port will be printed to the standard output.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
</pre>
=Fingerprint of a key=
For verification, it is often easier with shorter number strings. Therefore, the fingerprint is handy to compare keys more easily:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannod use ed25519 or other more modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to new OpenSSH daemons with an RSA key, you will find this in your log and you can not connect via ssh:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
The temporary workaround which is not recommended is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you better should switch to ED25519 keys.
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
As the Google Authenticator is a tool which is available on several SmartPhone OS I took this one for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns successful (code was correct) all authentication is done.
* new_authtok_reqd=done : New authentication token is required set to done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator failed no other authentications will be tried
* nullok : Allow user to access auth mechanism even if the password is empty
=== Add settings to the /etc/ssh/sshd_config ===
This lines have to be in the /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the setting in /etc/pam.d/sshd the "PasswordAuthentication no" will not be sufficient and still ask for a password because /etc/pam.d/sshd enables password authentication.
= Workarounds for CVEs =
In this section I just say: This <b>might</b> help! Absolutely no warranty for anything!<br>
If my workaround does not fix the problem look one line above this one.
== CVE-2023-48795 alias Terrapin ==
First read at [https://terrapin-attack.com/ terrapin-attack.com]
===Check if patches against this CVE are allready included in OS===
On Debian/Ubuntu do:
<SyntaxHighlight lang=bash>
$ sudo apt-get changelog openssh-server | grep -i CVE-2023-48795
- debian/patches/CVE-2023-48795.patch: implement "strict key exchange"
- CVE-2023-48795
</SyntaxHighlight>
This means there are patches against this CVE are included.
===Check ssh and sshd if they offer problematic ciphers and macs===
sshd:
<SyntaxHighlight lang=bash>
$ sudo sshd -T | grep -iE '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
macs umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
ssh: (Example for localhost, other hosts might get other values depending on your ~/.ssh/config or elsewhere)
<SyntaxHighlight lang=bash>
$ ssh -G localhost | grep -E '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
macs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
===Include workarounds===
The <i>Include</i> statement enters config in OpenSSH 7.3p1 as far as i know.
So, if you have an <i>Include</i> statement in<br>
/etc/ssh/sshd_config:
<pre>
Include /etc/ssh/sshd_config.d/*.conf
</pre>
/etc/ssh/ssh_config:
<pre>
Include /etc/ssh/ssh_config.d/*.conf
</pre>
then add the following files:<br>
/etc/ssh/sshd_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
/etc/ssh/ssh_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
After that do the checks from above. If they look good, restart sshd. If not... toss my trash.
39cfc8e2d1ca0a2e36aaadc8f73fa2161a6519af
2771
2770
2023-12-21T17:54:47Z
Lollypop
2
/* Check if patches against this CVE are allready included in OS */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
To make the SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you always log in first and then continue logging in, it is sometimes very difficult to loop through the port forwardings or the Socks5 proxy. It is easier to define <i>ProxyJump</i>s for the way from host_a to host_b.
So we only get from jumphost_2 to host_b, so we make an entry in ~/.ssh/config for this:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
==Portforwardings for example for NFS are now easy like this==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is unfortunately so unfortunate with firewalls that you can not work. But you have to SSH out to look somewhere else or to get something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. under Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server with an sshd listening on port 443, because most proxies only let you through on well-known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
Whoosh, with <i>ssh ssh-via-proxy</i> you are on the SSH target where you wanted to go. Of course you can chain another host behind this connection via <i>ProxyJump ssh-via-proxy</i> etc. etc.
==Ah yes... the internal wiki...==
Also not bad: if the wiki is only accessible from the internal network, we just request it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression <- this one is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local port for dynamic application-level (SOCKS5) forwarding
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want, for example, <i>ProxyJump</i> only when you are connected remotely via OpenVPN but not when you are in the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here is:<br>
You will be proxied over jumphost.office if both of the following are true:<br>
- There is a route on dev tun0 whose local IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
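The <i>Match exec</i> condition simply runs the command in a shell and applies the block if it exits 0. You can dry-run a candidate check before putting it into the config. A minimal sketch (tun0 and the 10.208.129.0/24 network are taken from the example above; this sketch greps the route output instead of using ip's own src filter):

```shell
# Succeed only when a tun0 route with a matching local source address
# exists, i.e. the VPN is up.
if ip route show dev tun0 2>/dev/null | grep -q 'src 10\.208\.129\.'; then
    echo "VPN up: ProxyJump would apply"
else
    echo "VPN down: direct connection"
fi
```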
=rsync from remote to remote=
Sometimes your local host is just needed as a relay station to sync between two servers that cannot see each other. You can reach both because you are in the admin network.
But you need to get files from HostA to HostB, and your laptop does not have enough disk space to copy them from HostA to the local disk first and then onto HostB.
This is a possible solution:
1. Make a reverse forwarding from localhost:PortX on HostA to HostB port 22 (so all packets you send to PortX on HostA come back to your laptop and are sent on to port 22, the SSH port, on HostB)
2. Execute rsync on HostA and tell rsync to make an SSH connection to port PortX for the destination host (which is sent back to your laptop and from there to HostB port 22, see 1.)
Here is an example (with a random port between 50000 and 52999):
<SyntaxHighLight lang=bash>
$ PortX=$(( ${RANDOM} % 3000 + 50000 ))
$ HostA=10.1.0.42
$ HostB=10.2.0.43
$ ssh -AR 127.0.0.1:${PortX}:${HostB}:22 ${HostA} "rsync -e 'ssh -p ${PortX} -o StrictHostKeyChecking=no' -PWav <HostA-Path> 127.0.0.1:<HostB-Path>"
</SyntaxHighLight>
Some explanations:
$RANDOM is a bash builtin, so this works only inside bash.
Use PortX=<your chosen port number> in other shells.
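If you need the same trick outside bash, a small awk one-liner can pick the random port instead (a sketch with the same 50000-52999 range):

```shell
# Pick a pseudo-random port between 50000 and 52999 without bash's $RANDOM
PortX=$(awk 'BEGIN{srand(); print 50000 + int(rand()*3000)}')
echo "${PortX}"
```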
SSH Options:
<pre>
-A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can
access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the
keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).
-R [bind_address:]port:host:hostport
Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix
socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by
host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destina‐
tions requested by the remote SOCKS client.
Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty
bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the
server's GatewayPorts option is enabled (see sshd_config(5)).
If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O
forward the allocated port will be printed to the standard output.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
</pre>
=Fingerprint of a key=
For verification, shorter strings are easier to handle. The fingerprint is therefore handy to compare keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
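If you want to see the different fingerprint formats without touching your real keys, you can generate a throwaway key (all paths here are temporary examples); older tools display the legacy MD5 form, which ssh-keygen can print with -E md5:

```shell
# Generate a throwaway Ed25519 key pair (no passphrase) in a temp dir
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/demo_key"
# Default SHA256 fingerprint
ssh-keygen -lf "$dir/demo_key.pub"
# Legacy MD5 form, as shown by older tools
ssh-keygen -l -E md5 -f "$dir/demo_key.pub"
```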
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
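To illustrate what the one-liner does, here it is applied to a minimal sample file (the key material and comment are made up; plain awk is used since nawk is the Solaris name, and printf "%s", line is used to be safe should a line ever contain a percent sign):

```shell
# Sample PuTTY-exported SSH2 public key (hypothetical key material)
cat > pubkey.ppk <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "lollypop@lollybook"
AAAAB3NzaC1yc2EAAAADAQABAAAB
gQDExampleExampleExampleExam
---- END SSH2 PUBLIC KEY ----
EOF
# Joins the base64 lines into one and appends the comment
awk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf "%s", line; getline line;} printf " %s\n",comment;}' pubkey.ppk
# -> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDExampleExampleExampleExam lollypop@lollybook
```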
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannot use ed25519 or other more modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to newer OpenSSH daemons with an RSA key, you will find this in your log and you cannot connect via ssh:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
The temporary workaround, which is not recommended, is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you should rather switch to Ed25519 keys.
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>.
Create the SFTP users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available on several smartphone operating systems, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : If a new authentication token is required, this is also treated as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : If pam_google_authenticator fails, no other authentication will be tried.
* nullok : Allow users who have not yet set up an OTP secret to log in without a code.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the change in /etc/pam.d/sshd, "PasswordAuthentication no" alone is not sufficient: sshd would still ask for a password, because the default /etc/pam.d/sshd enables password authentication.
= Workarounds for CVEs =
In this section I just say: This <b>might</b> help! Absolutely no warranty for anything!<br>
If my workaround does not fix the problem look one line above this one.
== CVE-2023-48795 alias Terrapin ==
First read at [https://terrapin-attack.com/ terrapin-attack.com]
===Check if patches against this CVE are already included in OS===
On Debian/Ubuntu do:
<SyntaxHighlight lang=bash>
$ sudo apt-get changelog openssh-server | grep -i CVE-2023-48795
- debian/patches/CVE-2023-48795.patch: implement "strict key exchange"
- CVE-2023-48795
</SyntaxHighlight>
This means patches against this CVE are included.
===Check ssh and sshd if they offer problematic ciphers and macs===
sshd:
<SyntaxHighlight lang=bash>
$ sudo sshd -T | grep -iE '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
macs umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
ssh: (Example for localhost; other hosts might get other values depending on your ~/.ssh/config or other configuration)
<SyntaxHighlight lang=bash>
$ ssh -G localhost | grep -E '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
macs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
===Include workarounds===
The <i>Include</i> statement is available since OpenSSH 7.3p1, as far as I know.
So, if you have an <i>Include</i> statement in<br>
/etc/ssh/sshd_config:
<pre>
Include /etc/ssh/sshd_config.d/*.conf
</pre>
/etc/ssh/ssh_config:
<pre>
Include /etc/ssh/ssh_config.d/*.conf
</pre>
then add the following files:<br>
/etc/ssh/sshd_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
/etc/ssh/ssh_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
After that do the checks from above. If they look good, restart sshd. If not... toss my trash.
3d7c34db2332cfed26ebe1c5165ea63bdd1e4830
TShark
0
238
2735
2417
2023-06-08T09:04:16Z
Lollypop
2
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal based wireshark.]
The ultimate tool to sniff network traffic when you have no X. It analyzes the traffic as wireshark does. Great tool!
==DNS Traffic==
<syntaxhighlight lang=bash>
# tshark -n -T fields -e frame.time -e dns.id -e ip.src -e ip.dst -e dns.qry.name -f 'port 53'
</syntaxhighlight>
==MySQL traffic==
To look for MySQL traffic on an application server you can use this line:
<syntaxhighlight lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# IFACE=ens192 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -Y "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.auth_plugin -e mysql.client_auth_plugin -e mysql.error_code -e mysql.error.message -e mysql.message -e mysql.user -e mysql.passwd -e mysql.command 'port 3306'
</syntaxhighlight>
The little awk magic selects only packets that involve our Ethernet address on interface ''IFACE''.
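What that awk does can be seen on a canned <i>ip link show</i> output (hypothetical interface and MAC address):

```shell
# The awk filter applied to sample `ip link show eth0` output
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff'
# Prints the second field of the line whose first field is link/ether
printf '%s\n' "$sample" | awk '$1 ~ /link\/ether/{print $2}'
# -> 52:54:00:12:34:56
```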
==Radius traffic==
Find the client with MAC address fc-18-3c-4a-c1-fa:
<syntaxhighlight lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2 , see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</syntaxhighlight>
With older tshark versions try:
<syntaxhighlight lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</syntaxhighlight>
==Duplicate ACKs==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</syntaxhighlight>
==Finding TCP problems==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</syntaxhighlight>
==Decode SSL Connections==
For example, show connections using TLS versions lower than 1.2. The handshake version codes are:
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<syntaxhighlight lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</syntaxhighlight>
or for https:
<syntaxhighlight lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</syntaxhighlight>
071dd374d05885a77a7afd00a61cf76c60a752b4
Nextcloud
0
368
2783
2736
2024-02-02T11:17:17Z
Lollypop
2
/* Some tweaks for the theme to disable several things */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<syntaxhighlight lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</syntaxhighlight>
==Send calendar events==
Set <i>sendEventRemindersMode</i> to occ:
<syntaxhighlight lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</syntaxhighlight>
and add a cronjob for the user running the webserver:
<syntaxhighlight lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</syntaxhighlight>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<syntaxhighlight lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</syntaxhighlight>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<syntaxhighlight lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</syntaxhighlight>
and since version 19:
<syntaxhighlight lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</syntaxhighlight>
Answer the questions...
If you have your own theme, proceed with these steps:
<syntaxhighlight lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</syntaxhighlight>
And the apps:
<syntaxhighlight lang=bash>
# occ app:update --all
</syntaxhighlight>
=Some tweaks for the theme to disable several things=
<syntaxhighlight lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* Get rid of the box at login: This community release of Nextcloud is unsupported and push notifications are limited. */
#body-login .notecard {
display: none;
visibility : hidden;
height : 0px !important;
width : 0px !important;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove background-image from all pages, but login page */
body:not(#body-login) {
background-image: none;
}
</syntaxhighlight>
= Memcached =
You can import one of the following variants of the config file with
<syntaxhighlight lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</syntaxhighlight>
== ip:port ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
1121
]
]
}
}
</syntaxhighlight>
== socket ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</syntaxhighlight>
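Since <i>occ config:import</i> expects valid JSON (note the double quotes, as in the snippets above), it can help to validate the file before importing. A sketch with an assumed file name; 11211 is the memcached default port, adjust host and port to your setup:

```shell
# Write an example memcached config and check that it parses as JSON
cat > memcache_config.json <<'EOF'
{
  "system": {
    "memcache.distributed": "\\OC\\Memcache\\Memcached",
    "memcached_servers": [ [ "127.0.0.1", 11211 ] ]
  }
}
EOF
python3 -m json.tool memcache_config.json > /dev/null && echo "valid JSON"
```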
f099f0442b95f65fb84f1740168c3f3ea1cb3e38
Docker tips and tricks
0
372
2737
2507
2023-06-14T18:23:04Z
Lollypop
2
wikitext
text/x-wiki
== Using docker behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit docker.service
</syntaxhighlight>
Enter the next three lines and save:
<syntaxhighlight lang=ini>
[Service]
Environment="HTTP_PROXY=user:pass@proxy:port"
Environment="HTTPS_PROXY=user:pass@proxy:port"
</syntaxhighlight>
Restart docker:
<syntaxhighlight lang=bash>
# systemctl restart docker.service
</syntaxhighlight>
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<syntaxhighlight lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</syntaxhighlight>
== Setting some defaults ==
/etc/docker/daemon.json
<syntaxhighlight lang=json>
{
"insecure-registries" : ["registry.server.de:5000"],
"data-root": "/docker-data/",
"default-address-pools": [
{
"scope": "local",
"base": "10.42.0.0/16",
"size": 24
}
],
"log-driver": "json-file",
"log-opts": {
"max-size": "2m",
"max-file": "10"
}
}
</syntaxhighlight>
aa2ba94be5e8622ba3dfbd25e36611cac5d8da48
PowerDNS
0
287
2738
2660
2023-06-16T08:35:32Z
Lollypop
2
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are on Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
I had problems with log lines being multiplied over syslog and polluting the disks with redundant log entries.
So I found a way to bind the daemon output to a dedicated log namespace and pick it up later in syslog-ng.
<syntaxhighlight lang=bash>
$ sudo systemctl edit pdns-recursor.service
</syntaxhighlight>
<syntaxhighlight lang=Ini>
[Service]
LogNamespace=pdns-recursor
</syntaxhighlight>
In <i>/etc/systemd/journald.conf</i> set it from
<syntaxhighlight lang=bash>
#ForwardToSyslog=yes
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
ForwardToSyslog=yes
</syntaxhighlight>
Then restart the journald
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
Then tell syslog-ng to take the dev-log socket from journald as input:
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also just do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns/root.hints <-- bind mount from /usr/share/dns (dir with root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --all --type=mount,service var-chroot-* pdns*
UNIT LOAD ACTIVE SUB DESCRIPTION
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
pdns-recursor.service loaded active running PowerDNS Recursor
pdns.service loaded active running PowerDNS Authoritative Server
var-chroot-create-dirs.service loaded active exited Create directories under /var/chroot
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
9 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
</syntaxhighlight>
and a service to create the needed /var/chroot/run/systemd/notify file to bind mount the socket from systemd to it.
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns\x2drecursor.mount
[Unit]
Description=Mount /run/pdns-recursor to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns-recursor
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/pdns-recursor
Where=/var/chroot/run/pdns-recursor
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
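A note on the file name above: systemd requires a mount unit to be named after its mount point, with the path separators turned into dashes and literal dashes escaped as \x2d; systemd-escape prints the correct name. The sketch below reproduces only the two escaping rules that matter here (the real escaping also hex-encodes other special characters, so use systemd-escape for anything non-trivial):
<syntaxhighlight lang=bash>
# Simplified version of systemd's path escaping for unit names.
# Canonical tool: systemd-escape -p /var/chroot/run/pdns-recursor
escape_mount_path() {
    printf '%s' "$1" | sed -e 's:^/::' -e 's:-:\\x2d:g' -e 's:/:-:g'
}
escape_mount_path /var/chroot/run/pdns-recursor   # prints var-chroot-run-pdns\x2drecursor
</syntaxhighlight>
Append ".mount" to the result to get the unit file name used above.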
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
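With all units enabled and started, a quick sanity check that the bind mounts actually populated the chroot can look like this (check_chroot_layout is a hypothetical helper of mine, not part of pdns or systemd):
<syntaxhighlight lang=bash>
# Verify the paths that pdns/pdns-recursor expect inside the chroot.
check_chroot_layout() {
    for p in /var/chroot/run/pdns \
             /var/chroot/run/pdns-recursor \
             /var/chroot/run/systemd/notify \
             /var/chroot/usr/share/dns/root.hints
    do
        if [ -e "$p" ]; then
            echo "OK       $p"
        else
            echo "MISSING  $p"
        fi
    done
}
check_chroot_layout
</syntaxhighlight>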
abcf490942ed41a1bb15e8d13ef104b8fe136a45
2742
2741
2023-06-16T09:08:52Z
Lollypop
2
/* Logging with systemd and syslog-ng */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
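A Pin-Priority of 1000 or higher makes apt install the pinned version even if that would otherwise be a downgrade, which is what overrides the xenial default release here; apt-cache policy pdns-recursor shows whether the pin took effect. As a self-contained sanity check of the preferences file itself (check_pdns_pins is a hypothetical helper, not an apt tool):
<syntaxhighlight lang=bash>
# Every pdns-* stanza should carry Pin-Priority 1000, one per zesty pocket.
check_pdns_pins() {
    [ "$(grep -c '^Pin-Priority: 1000$' "$1")" -eq 3 ]
}
# Usage: check_pdns_pins /etc/apt/preferences.d/pdns && echo "pins look sane"
</syntaxhighlight>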
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
I had problems with log lines being multiplied through syslog, polluting the disks with redundant entries.
So I found a way to bind the daemon output to a dedicated journal namespace and pick it up from there in syslog-ng.
<syntaxhighlight lang=bash>
$ sudo systemctl edit pdns-recursor.service
</syntaxhighlight>
<syntaxhighlight lang=Ini>
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no
LogNamespace=pdns-recursor
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo systemctl restart pdns-recursor.service
</syntaxhighlight>
After that you will find the daemon's output with:
<syntaxhighlight lang=bash>
$ sudo journalctl -lf --namespace=pdns-recursor
</syntaxhighlight>
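The syslog-ng source s_journal_pdns below reads a namespace called "pdns", so the authoritative server wants an analogous override. A minimal sketch, assuming pdns.service gets the same kind of drop-in (flags taken from the pdns override used later on this page):
<syntaxhighlight lang=ini>
# sketch: sudo systemctl edit pdns.service
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --write-pid=no
LogNamespace=pdns
</syntaxhighlight>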
/etc/powerdns/recursor.d/syslog.conf
<syntaxhighlight lang=bash>
log-timestamp=no
quiet=no
disable-syslog=no
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo systemctl restart pdns-recursor.service
</syntaxhighlight>
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_journal_pdns_recursor
{
systemd-journal(namespace("pdns-recursor"));
};
source s_journal_pdns
{
systemd-journal(namespace("pdns"));
};
source s_src {
#system();
internal();
};
</syntaxhighlight>
Then you can route this dedicated source into your favorite destinations, for example:
<syntaxhighlight lang=bash>
destination d_graylog {
network(
"172.16.1.210"
port("514")
transport(udp)
);
};
log {
source(s_journal_pdns_recursor);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>
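The same pattern works for the authoritative server's namespace; a sketch of a matching log path, reusing the s_journal_pdns source and the d_graylog destination defined above:
<syntaxhighlight lang=bash>
log {
source(s_journal_pdns);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>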
<syntaxhighlight lang=bash>
$ sudo systemctl restart syslog-ng.service
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns <-- bind mount from /usr/share/dns (dir with root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --all --type=mount,service var-chroot-* pdns*
UNIT LOAD ACTIVE SUB DESCRIPTION
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
pdns-recursor.service loaded active running PowerDNS Recursor
pdns.service loaded active running PowerDNS Authoritative Server
var-chroot-create-dirs.service loaded active exited Create directories under /var/chroot
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
9 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
</syntaxhighlight>
We also need a service that creates the /var/chroot/run/systemd/notify file, so the systemd notify socket can be bind mounted onto it:
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns\x2drecursor.mount
[Unit]
Description=Mount /run/pdns-recursor to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns-recursor
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/pdns-recursor
Where=/var/chroot/run/pdns-recursor
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
87a3f6bd3647d14deabbd3e87e565a240d6f7be6
2743
2742
2023-06-16T09:10:45Z
Lollypop
2
/* Logging with systemd and syslog-ng */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
I had problems with log lines being multiplied through syslog, polluting the disks with redundant entries.
So I found a way to bind the daemon output to a dedicated journal namespace and pick it up from there in syslog-ng.
<syntaxhighlight lang=bash>
$ sudo systemctl edit pdns-recursor.service
</syntaxhighlight>
<syntaxhighlight lang=Ini>
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no
LogNamespace=pdns-recursor
</syntaxhighlight>
/etc/powerdns/recursor.d/syslog.conf
<syntaxhighlight lang=bash>
log-timestamp=no
quiet=no
disable-syslog=no
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo systemctl restart pdns-recursor.service
</syntaxhighlight>
After that you will find the daemon's output with:
<syntaxhighlight lang=bash>
$ sudo journalctl -lf --namespace=pdns-recursor
</syntaxhighlight>
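The syslog-ng source s_journal_pdns below reads a namespace called "pdns", so the authoritative server wants an analogous override. A minimal sketch, assuming pdns.service gets the same kind of drop-in (flags taken from the pdns override used later on this page):
<syntaxhighlight lang=ini>
# sketch: sudo systemctl edit pdns.service
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --write-pid=no
LogNamespace=pdns
</syntaxhighlight>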
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_journal_pdns_recursor
{
systemd-journal(namespace("pdns-recursor"));
};
source s_journal_pdns
{
systemd-journal(namespace("pdns"));
};
source s_src {
#system();
internal();
};
</syntaxhighlight>
Then you can route this dedicated source into your favorite destinations, for example:
<syntaxhighlight lang=bash>
destination d_graylog {
network(
"172.16.1.210"
port("514")
transport(udp)
);
};
log {
source(s_journal_pdns_recursor);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>
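The same pattern works for the authoritative server's namespace; a sketch of a matching log path, reusing the s_journal_pdns source and the d_graylog destination defined above:
<syntaxhighlight lang=bash>
log {
source(s_journal_pdns);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>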
<syntaxhighlight lang=bash>
$ sudo systemctl restart syslog-ng.service
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns <-- bind mount from /usr/share/dns (dir with root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --all --type=mount,service var-chroot-* pdns*
UNIT LOAD ACTIVE SUB DESCRIPTION
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
pdns-recursor.service loaded active running PowerDNS Recursor
pdns.service loaded active running PowerDNS Authoritative Server
var-chroot-create-dirs.service loaded active exited Create directories under /var/chroot
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
9 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
</syntaxhighlight>
We also need a service that creates the /var/chroot/run/systemd/notify file, so the systemd notify socket can be bind mounted onto it:
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns\x2drecursor.mount
[Unit]
Description=Mount /run/pdns-recursor to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns-recursor
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/pdns-recursor
Where=/var/chroot/run/pdns-recursor
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
134e8550b930603fc40c23edb0f69da39cdcfdd0
2744
2743
2023-06-16T09:13:43Z
Lollypop
2
/* Logging with systemd and syslog-ng */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
I had problems with log lines being multiplied through syslog, polluting the disks with redundant entries.
So I found a way to bind the daemon output to a dedicated journal namespace and pick it up from there in syslog-ng.
<syntaxhighlight lang=bash>
$ sudo systemctl edit pdns-recursor.service
</syntaxhighlight>
<syntaxhighlight lang=Ini>
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no
LogNamespace=pdns-recursor
</syntaxhighlight>
/etc/powerdns/recursor.d/syslog.conf
<syntaxhighlight lang=bash>
log-timestamp=no
quiet=no
disable-syslog=no
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo systemctl restart pdns-recursor.service
</syntaxhighlight>
After that you will find the daemon's output with:
<syntaxhighlight lang=bash>
$ sudo journalctl -lf --namespace=pdns-recursor
</syntaxhighlight>
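The syslog-ng source s_journal_pdns below reads a namespace called "pdns", so the authoritative server wants an analogous override. A minimal sketch, assuming pdns.service gets the same kind of drop-in (flags taken from the pdns override used later on this page):
<syntaxhighlight lang=ini>
# sketch: sudo systemctl edit pdns.service
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --write-pid=no
LogNamespace=pdns
</syntaxhighlight>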
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_journal_pdns_recursor
{
systemd-journal(namespace("pdns-recursor"));
};
source s_journal_pdns
{
systemd-journal(namespace("pdns"));
};
source s_src {
#system();
internal();
};
</syntaxhighlight>
Then you can route this dedicated source into your favorite destinations, for example:
<syntaxhighlight lang=bash>
destination d_graylog {
network(
"172.16.1.210"
port("514")
transport(udp)
);
};
log {
source(s_journal_pdns_recursor);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>
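The same pattern works for the authoritative server's namespace; a sketch of a matching log path, reusing the s_journal_pdns source and the d_graylog destination defined above:
<syntaxhighlight lang=bash>
log {
source(s_journal_pdns);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>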
<syntaxhighlight lang=bash>
$ sudo systemctl restart syslog-ng.service
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
What we need to run pdns{,-recursor} in chroot is this:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns <-- bind mount from /usr/share/dns (dir with root.hints file)
</syntaxhighlight>
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --all --type=mount,service var-chroot-* pdns*
UNIT LOAD ACTIVE SUB DESCRIPTION
var-chroot-run-pdns.mount loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount loaded active mounted Mount /usr/share/dns (root.hints) to chroot
pdns-recursor.service loaded active running PowerDNS Recursor
pdns.service loaded active running PowerDNS Authoritative Server
var-chroot-create-dirs.service loaded active exited Create directories under /var/chroot
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
9 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
</syntaxhighlight>
We also need a service that creates the /var/chroot/run/systemd/notify file, so the systemd notify socket can be bind mounted onto it:
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-mkdir /var/chroot/run/systemd
ExecStart=-touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns\x2drecursor.mount
[Unit]
Description=Mount /run/pdns-recursor to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns-recursor
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/pdns-recursor
Where=/var/chroot/run/pdns-recursor
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
564828ceb486d681c45a34200fe1127eeeabbfc7
2745
2744
2023-06-16T09:22:38Z
Lollypop
2
/* Logging with systemd and syslog-ng */
wikitext
text/x-wiki
[[Category: DNS]]
=PowerDNS Server (pdns_server)=
==Newer version in Ubuntu==
If you are running Ubuntu xenial and need a newer PowerDNS from Ubuntu zesty, do this:
===/etc/apt/apt.conf.d/01pinning===
<syntaxhighlight lang=apt>
APT::Default-Release "xenial";
</syntaxhighlight>
===/etc/apt/preferences.d/pdns===
<syntaxhighlight lang=apt>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</syntaxhighlight>
===/etc/apt/sources.list===
Add the zesty sources, for example:
<syntaxhighlight>
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</syntaxhighlight>
===Do the upgrade===
<syntaxhighlight lang=bash>
# apt update
# apt install pdns-recursor/zesty pdns-tools/zesty libstdc++6/zesty gcc-6-base/zesty
</syntaxhighlight>
==Logging with systemd and syslog-ng==
I had problems with log lines being multiplied through syslog, polluting the disks with redundant entries.
So I found a way to bind the daemon output to a dedicated journal namespace and pick it up from there in syslog-ng.
<syntaxhighlight lang=bash>
$ sudo systemctl edit pdns-recursor.service
</syntaxhighlight>
<syntaxhighlight lang=Ini>
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no
LogNamespace=pdns-recursor
</syntaxhighlight>
/etc/powerdns/recursor.d/syslog.conf
<syntaxhighlight lang=bash>
log-timestamp=no
quiet=no
disable-syslog=no
</syntaxhighlight>
<syntaxhighlight lang=bash>
$ sudo systemctl restart pdns-recursor.service
</syntaxhighlight>
After that you will find the daemon's output with:
<syntaxhighlight lang=bash>
$ sudo journalctl -lf --namespace=pdns-recursor
</syntaxhighlight>
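The syslog-ng source s_journal_pdns below reads a namespace called "pdns", so the authoritative server wants an analogous override. A minimal sketch, assuming pdns.service gets the same kind of drop-in (flags taken from the pdns override used later on this page):
<syntaxhighlight lang=ini>
# sketch: sudo systemctl edit pdns.service
[Service]
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --write-pid=no
LogNamespace=pdns
</syntaxhighlight>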
Change the part in <i>/etc/syslog-ng/syslog-ng.conf</i> from
<syntaxhighlight lang=bash>
source s_src {
system();
internal();
};
</syntaxhighlight>
to
<syntaxhighlight lang=bash>
source s_journal_pdns_recursor
{
systemd-journal(namespace("pdns-recursor"));
};
source s_journal_pdns
{
systemd-journal(namespace("pdns"));
};
source s_src {
# system() already includes systemd-journal(), so you have to comment it out or you will get this error:
# The configuration must not contain more than one systemd-journal() source;
#system();
internal();
};
</syntaxhighlight>
Then you can route this dedicated source into your favorite destinations, for example:
<syntaxhighlight lang=bash>
destination d_graylog {
network(
"172.16.1.210"
port("514")
transport(udp)
);
};
log {
source(s_journal_pdns_recursor);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>
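The same pattern works for the authoritative server's namespace; a sketch of a matching log path, reusing the s_journal_pdns source and the d_graylog destination defined above:
<syntaxhighlight lang=bash>
log {
source(s_journal_pdns);
destination(d_graylog);
flags(final);
};
</syntaxhighlight>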
<syntaxhighlight lang=bash>
$ sudo systemctl restart syslog-ng.service
</syntaxhighlight>
==chroot with systemd==
Create the chroot base. I would prefer to set up a ZFS dataset for it, but you can also simply do:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot
</syntaxhighlight>
This is what we need to run pdns and pdns-recursor in a chroot:
<syntaxhighlight lang=bash>
/var/chroot/run/systemd/notify       <-- bind mount from /run/systemd/notify (socket)
/var/chroot/run/pdns-recursor        <-- bind mount from /run/pdns-recursor (dir)
/var/chroot/run/pdns                 <-- bind mount from /run/pdns (dir)
/var/chroot/usr/share/dns/root.hints <-- bind mount from /usr/share/dns (dir with root.hints file)
</syntaxhighlight>
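Before writing the unit files, the same layout can be dry-run by hand. This sketch only prints the mount commands it would run (nothing is mounted); note that the notify target is a file, not a directory, so the placeholder file created by the var-chroot-create-dirs.service below has to exist first.
<syntaxhighlight lang=bash>
out=""
for pair in \
    "/run/systemd/notify:/var/chroot/run/systemd/notify" \
    "/run/pdns-recursor:/var/chroot/run/pdns-recursor" \
    "/run/pdns:/var/chroot/run/pdns" \
    "/usr/share/dns:/var/chroot/usr/share/dns"; do
    src=${pair%%:*}   # source outside the chroot
    dst=${pair#*:}    # target inside the chroot
    out="${out}mount --bind $src $dst
"
done
printf '%s' "$out"    # review first, then run the lines by hand if you like
</syntaxhighlight>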
For that we have to create some systemd.mount files:
<syntaxhighlight lang=bash>
# systemctl list-units --all --type=mount,service var-chroot-* pdns*
UNIT                                  LOAD   ACTIVE SUB     DESCRIPTION
var-chroot-run-pdns.mount             loaded active mounted Mount /run/pdns to chroot
var-chroot-run-pdns\x2drecursor.mount loaded active mounted Mount /run/pdns-recursor to chroot
var-chroot-run-systemd-notify.mount   loaded active mounted Mount /run/systemd/notify to chroot
var-chroot-run.mount                  loaded active mounted Temporary Directory /var/chroot/run
var-chroot-tmp.mount                  loaded active mounted Temporary Directory /var/chroot/tmp
var-chroot-usr-share-dns.mount        loaded active mounted Mount /usr/share/dns (root.hints) to chroot
pdns-recursor.service                 loaded active running PowerDNS Recursor
pdns.service                          loaded active running PowerDNS Authoritative Server
var-chroot-create-dirs.service        loaded active exited  Create directories under /var/chroot

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

9 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
</syntaxhighlight>
and a service that creates the /var/chroot/run/systemd/notify file onto which the systemd notify socket is bind mounted:
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run.mount
[Unit]
Description=Temporary Directory /var/chroot/run
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/run
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/run
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-tmp.mount
[Unit]
Description=Temporary Directory /var/chroot/tmp
Documentation=https://systemd.io/TEMPORARY_DIRECTORIES
Documentation=man:file-hierarchy(7)
Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
ConditionPathIsSymbolicLink=!/var/chroot/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/chroot/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,noexec,size=50%%,nr_inodes=1m
[Install]
WantedBy=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-create-dirs.service
[Unit]
Description=Create directories under /var/chroot
ConditionPathExists=/var/chroot/run
After=var-chroot-run.mount
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=pdns pdns-recursor
RuntimeDirectoryMode=0750
RuntimeDirectoryPreserve=True
User=pdns
Group=pdns
ExecStart=-/bin/mkdir /var/chroot/run/systemd
ExecStart=-/usr/bin/touch /var/chroot/run/systemd/notify
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
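The two ExecStart lines only prepare a mount point for the notify socket. A minimal sketch of what they do, run against a throwaway directory instead of the real /var/chroot/run:
<syntaxhighlight lang=bash>
base=$(mktemp -d)             # stands in for /var/chroot/run
mkdir "$base/systemd"         # what ExecStart=-/bin/mkdir does
touch "$base/systemd/notify"  # empty file the socket is later bind mounted over
ls "$base/systemd"            # prints: notify
</syntaxhighlight>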
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns.mount
[Unit]
Description=Mount /run/pdns to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
After=pdns.service
[Mount]
What=/run/pdns
Where=/var/chroot/run/pdns
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-pdns\x2drecursor.mount
[Unit]
Description=Mount /run/pdns-recursor to chroot
DefaultDependencies=no
ConditionPathExists=/run/pdns-recursor
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/pdns-recursor
Where=/var/chroot/run/pdns-recursor
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-run-systemd-notify.mount
[Unit]
Description=Mount /run/systemd/notify to chroot
DefaultDependencies=no
ConditionPathExists=/run/systemd/notify
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/run/systemd/notify
Where=/var/chroot/run/systemd/notify
Type=none
Options=rbind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/var-chroot-usr-share-dns.mount
[Unit]
Description=Mount /usr/share/dns (root.hints) to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/usr/share/dns
ConditionCapability=CAP_SYS_ADMIN
After=zfs-mount.service
After=var-chroot-create-dirs.service
Before=pdns-recursor.service
[Mount]
What=/usr/share/dns
Where=/var/chroot/usr/share/dns
Type=none
Options=rbind,ro
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Now we are ready to modify pdns.service and pdns-recursor.service like this:
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_server --guardian=no --daemon=no --disable-syslog --log-timestamp=no --write-pid=no
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_CHOWN CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/pdns-recursor.service.d/override.conf
[Service]
Type=simple
RuntimeDirectoryPreserve=True
ExecStart=
ExecStart=/usr/sbin/pdns_recursor --daemon=no --write-pid=no --include-dir=/etc/powerdns/recursor.d
# Add the possibility to change user id and group id and to chroot
CapabilityBoundingSet=CAP_SETGID CAP_SETUID CAP_SYS_CHROOT
AmbientCapabilities=CAP_SETGID CAP_SETUID CAP_SYS_CHROOT
SystemCallFilter=@mount
[Unit]
Wants=local-fs.target
</syntaxhighlight>
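The overrides above only grant CAP_SYS_CHROOT; the chroot itself is enabled in the PowerDNS configuration. A minimal sketch for the recursor, assuming the /var/chroot base created above (the file name is hypothetical, but it lands in the include-dir passed on the command line):
<syntaxhighlight lang=ini>
# /etc/powerdns/recursor.d/chroot.conf (hypothetical file name)
chroot=/var/chroot
# resolved inside the chroot; /run/pdns-recursor is bind mounted there
socket-dir=/run/pdns-recursor
# resolved inside the chroot; /usr/share/dns is bind mounted there
hint-file=/usr/share/dns/root.hints
</syntaxhighlight>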
[[category:Ubuntu]]
=Ubuntu zsys=
==Configure garbage collection==
<syntaxhighlight lang=bash>
cat > /etc/zsys.conf <<EOF
history:
  # Keep at least n history entries per unit of time if enough of them are present.
  # The order conditions the bucket start and end dates (from most recent to oldest).
  # We also keep all previous state saves for the previous day:
  # gcstartafter: 1 (GC starts after one whole day).
  gcstartafter: 1
  # Minimum number of recent states to keep.
  keeplast: 7
  # - name:             arbitrary name of the bucket
  #   buckets:          number of buckets over the interval
  #   bucketlength:     length of each bucket in days
  #   samplesperbucket: number of states to keep in each bucket
  gcrules:
    # For the previous day (after one full day of retention of all
    # snapshots due to gcstartafter: 1), the rule PreviousDay defines one
    # bucket (buckets: 1) of size 1 day (bucketlength: 1) in which we keep
    # 3 states. So basically, we keep 3 states on the previous full day.
    - name: PreviousDay
      buckets: 1
      bucketlength: 1
      samplesperbucket: 3
    # For the 5 days before that (buckets: 5 of size 1 day), we keep one
    # state (samplesperbucket: 1), i.e. one state per day for each of
    # those 5 days.
    - name: PreviousWeek
      buckets: 5
      bucketlength: 1
      samplesperbucket: 1
    # We divide the previous month into 4 buckets (buckets: 4) of 7 days
    # each (bucketlength: 7) and keep one state in each (samplesperbucket: 1).
    # In English, this means we try to keep one state save per week over
    # the previous month.
    - name: PreviousMonth
      buckets: 4
      bucketlength: 7
      samplesperbucket: 1
general:
  # Minimal free pool space required before taking a snapshot
  minfreepoolspace: 20
  # Daemon timeout in seconds
  timeout: 60
EOF
systemctl restart zsysd.service
zsysctl -vvv service gc
update-grub
</syntaxhighlight>
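As a quick sanity check on the retention budget of these rules, assuming zsys can fill every bucket: keeplast (7) plus PreviousDay (1 bucket x 3 samples) plus PreviousWeek (5 x 1) plus PreviousMonth (4 x 1):
<syntaxhighlight lang=bash>
total=$((7 + 1*3 + 5*1 + 4*1))
echo "$total"   # 19 retained system states at steady state
</syntaxhighlight>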
== Current machine isn't Zsys, nothing to create ==
<syntaxhighlight lang=bash>
# zfs set com.ubuntu.zsys:bootfs=yes $(df --output=source -t zfs / | tail -1)
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zfs list -o name,com.ubuntu.zsys:bootfs $(df --output=source -t zfs / | tail -1)
NAME                      COM.UBUNTU.ZSYS:BOOTFS
rpool/ROOT/ubuntu_82yzok  yes
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl restart zsys*
</syntaxhighlight>
<syntaxhighlight lang=bash>
# zsysctl list
ID                        ZSys  Last Used
--                        ----  ---------
rpool/ROOT/ubuntu_82yzok  true  current
# zsysctl save --system
ZSys is adding automatic system snapshot to GRUB menu
</syntaxhighlight>
2071f09577df8da4f19232b67867e36b2819af4b
[[Category:Linux|ZFS]]
[[Category:ZFS|Linux]]
[[Category:VirtualBox|ZFS]]
=ZFS on Linux=
==Grub==
Create /etc/udev/rules.d/99-local-grub.rules with this content:
<syntaxhighlight lang=bash>
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
</syntaxhighlight>
==Virtualbox on ZVols==
If you use ZVols as raw VMDK devices in VirtualBox as a normal user (vmuser in this example), create /etc/udev/rules.d/99-local-zvol.rules with this content:
<syntaxhighlight lang=bash>
KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM="/lib/udev/zvol_id /dev/%k", RESULT=="rpool/VM/*", OWNER="vmuser"
</syntaxhighlight>
<syntaxhighlight lang=bash>
vmuser@virtualbox-server:~$ VBoxManage internalcommands createrawvmdk -filename /var/data/VMs/dev/Solaris10.vmdk -rawdisk /dev/zvol/rpool/VM/Solaris10
</syntaxhighlight>
==Setup Ubuntu 16.04 with ZFS root==
Most is from here [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu-16.04-Root-on-ZFS].
Boot Ubuntu Desktop (alias Live CD) and choose "try out".
===Get the right ashift value===
For example to get sda and sdb:
<syntaxhighlight lang=bash>
# lsblk -o NAME,PHY-SeC,LOG-SEC /dev/sd{a,b} | awk 'function exponent (value) {for(i=0;value>1;i++){value/=2;}; return i;}{if($2 ~ /[0-9]+/){print $0,exponent($2)}else{print$0,"ashift"}}'
NAME PHY-SEC LOG-SEC ashift
sda 512 512 9
├─sda1 512 512 9
├─sda2 512 512 9
├─sda3 512 512 9
└─sda4 512 512 9
sdb 4096 512 12
├─sdb1 4096 512 12
├─sdb2 4096 512 12
├─sdb3 4096 512 12
└─sdb4 4096 512 12
</syntaxhighlight>
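The exponent() helper in the awk one-liner is just the base-2 logarithm of the physical sector size. The same calculation in plain shell, assuming a disk that reports 4096-byte physical sectors:
<syntaxhighlight lang=bash>
phys=4096                      # value from the PHY-SEC column
ashift=0; v=$phys
while [ "$v" -gt 1 ]; do       # repeated halving counts the powers of two
    v=$((v / 2))
    ashift=$((ashift + 1))
done
echo "ashift=$ashift"          # ashift=12 for 4096-byte sectors, 9 for 512
</syntaxhighlight>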
===Connect it to your network===
<syntaxhighlight lang=bash>
sudo -i
ifconfig ens160 <IP> netmask 255.255.255.0
route add default gw <defaultrouter>
echo "nameserver <nameserver>" >> /etc/resolv.conf
echo 'Acquire::http::Proxy "http://<user>:<pass>@<proxyhost>:<proxyport>";' >> /etc/apt/apt.conf
apt-add-repository universe
apt update
apt --yes install openssh-server
passwd ubuntu
# Reconnect via ssh
apt install --yes debootstrap gdisk zfs-initramfs
sgdisk -g -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n9:-8M:0 -t9:BF07 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zpool create -f -o ashift=12 \
-O atime=off \
-O canmount=off \
-O compression=lz4 \
-O normalization=formD \
-O mountpoint=/ \
-R /mnt \
rpool /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4-part1
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o setuid=off rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
cp -p {,/mnt}/etc/apt/apt.conf
export http_proxy=$(awk '/Acquire::http::Proxy/{gsub(/\"/,"");gsub(/;$/,"");print $2}' /mnt/etc/apt/apt.conf)
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chmod 1777 /mnt/var/tmp
debootstrap xenial /mnt
zfs set devices=off rpool
HOSTNAME=Template-VM
echo ${HOSTNAME} > /mnt/etc/hostname
printf "127.0.1.1\t%s\n" "${HOSTNAME}" >> /mnt/etc/hosts
INTERFACE=$(ip a s scope global | awk 'NR==1{gsub(/:$/,"",$2);print $2;}')
printf "auto %s\niface %s inet dhcp\n" "${INTERFACE}" "${INTERFACE}" > /mnt/etc/network/interfaces.d/${INTERFACE}
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
cp -p {,/mnt}/etc/apt/apt.conf
echo -n xenial{,-security,-updates} | \
xargs -n 1 -d ' ' -I{} echo "deb http://archive.ubuntu.com/ubuntu {} main universe" > /mnt/etc/apt/sources.list
chroot /mnt /bin/bash --login
locale-gen en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/default/locale
LANG="en_US.UTF-8"
dpkg-reconfigure tzdata
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes ubuntu-minimal
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
apt install --yes openssh-server
apt install --yes grub-pc
addgroup --system lpadmin
addgroup --system sambashare
passwd
grub-probe /
update-initramfs -c -k all
vi /etc/default/grub
# Comment out: GRUB_HIDDEN_TIMEOUT=0
# Remove "quiet" and "splash" from: GRUB_CMDLINE_LINUX_DEFAULT
# Uncomment: GRUB_TERMINAL=console
update-grub
grub-install /dev/disk/by-id/scsi-36000c2932cdb62febff0b5ac93786dd4
zfs snapshot rpool/ROOT/ubuntu@install
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export rpool
reboot
# After the reboot, log in to the new system and set up encrypted swap:
apt install --yes cryptsetup
echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
systemctl daemon-reload
systemctl start systemd-cryptsetup@cryptswap1.service
echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
swapon -av
</syntaxhighlight>
==Swap on ZFS with random key encryption==
<syntaxhighlight lang=bash>
$ sudo systemctl edit --force --full zfs-cryptswap@.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
# /etc/systemd/system/zfs-cryptswap@.service
[Unit]
Description=ZFS Random Cryptography Setup for %I
Documentation=man:zfs(8)
DefaultDependencies=no
Conflicts=umount.target
IgnoreOnIsolate=true
After=systemd-random-seed.service zfs-volumes.target
BindsTo=dev-zvol-rpool-%i.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
KeyringMode=shared
OOMScoreAdjust=500
UMask=0077
RuntimeDirectory=zfs-cryptswap.%i
RuntimeDirectoryMode=0700
ExecStartPre=-/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStartPre=-/sbin/zfs destroy 'rpool/%i'
ExecStartPre=/bin/dd if=/dev/urandom of=/run/zfs-cryptswap.%i/%i.key bs=32 count=1
ExecStart=/sbin/zfs create -V 4G -b 8k -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false -o encryption=on -o keyformat=raw -o keylocation=file:///run/zfs-cryptswap.%i/%i.key rpool/%i
ExecStart=/bin/sleep 1
ExecStartPost=/sbin/mkswap '/dev/zvol/rpool/%i'
ExecStartPost=/sbin/swapon '/dev/zvol/rpool/%i'
ExecStop=/sbin/swapoff '/dev/zvol/rpool/%i'
ExecStop=/bin/sleep 2
ExecStopPost=/sbin/zfs destroy 'rpool/%i'
[Install]
WantedBy=swap.target
</syntaxhighlight>
'''BE CAREFUL with the name after the @!'''
The name after the @ is the name of the ZFS dataset that will be DESTROYED and recreated!
To destroy and recreate an encrypted ZFS volume named cryptswap use:
<syntaxhighlight lang=bash>
# systemctl start zfs-cryptswap@cryptswap.service
# systemctl enable zfs-cryptswap@cryptswap.service
# update-initramfs -k $(uname -r) -u
</syntaxhighlight>
==Kernel settings for ZFS==
=== Set module parameters in /etc/modprobe.d/zfs.conf===
<syntaxhighlight lang=bash>
options zfs zfs_arc_max=10737418240
# increase these so scrub/resilver runs more quickly at the cost of other work
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
# sync write
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
# sync reads (normal)
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
# async reads : prefetcher
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
# async write : bulk writes
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default : 8 * 1024 * 1024
# setting here : 500 * 1024 * 1024
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
Remember to update your initramfs before rebooting: the initramfs is read when the module is loaded at boot, so new option values only take effect after it has been rebuilt.
<syntaxhighlight lang=bash>
# update-initramfs -k all -u
</syntaxhighlight>
=== Check settings ===
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe -c | grep "options zfs"
options zfs zfs_arc_max=10737418240
options zfs zfs_vdev_scrub_min_active=24
options zfs zfs_vdev_scrub_max_active=64
options zfs zfs_vdev_sync_write_min_active=8
options zfs zfs_vdev_sync_write_max_active=32
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=32
options zfs zfs_vdev_async_read_min_active=8
options zfs zfs_vdev_async_read_max_active=32
options zfs zfs_vdev_async_write_min_active=8
options zfs zfs_vdev_async_write_max_active=32
options zfs l2arc_write_max=524288000
options zfs zfs_top_maxinflight=512
options zfs zfs_resilver_min_time_ms=8000
options zfs zfs_resilver_delay=0
</syntaxhighlight>
<syntaxhighlight lang=bash>
root@zfshost:~# modprobe --show-depends zfs
insmod /lib/modules/4.15.0-58-generic/kernel/spl/spl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/znvpair.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zcommon.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/icp.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zavl.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zunicode.ko
insmod /lib/modules/4.15.0-58-generic/kernel/zfs/zfs.ko zfs_arc_max=10737418240 zfs_vdev_scrub_min_active=24 zfs_vdev_scrub_max_active=64 zfs_vdev_sync_write_min_active=8 zfs_vdev_sync_write_max_active=32 zfs_vdev_sync_read_min_active=8 zfs_vdev_sync_read_max_active=32 zfs_vdev_async_read_min_active=8 zfs_vdev_async_read_max_active=32 zfs_vdev_async_write_min_active=8 zfs_vdev_async_write_max_active=32 l2arc_write_max=524288000 zfs_top_maxinflight=512 zfs_resilver_min_time_ms=8000 zfs_resilver_delay=0
</syntaxhighlight>
=== Check actual settings ===
Check files in
* /proc/spl/kstat/zfs/
* /sys/module/zfs/parameters/
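A small sketch to dump every parameter in one of those directories as name=value pairs. The helper name `print_params` is made up; the demo runs on a throwaway directory so it works even without the zfs module loaded (on a real system you would pass /sys/module/zfs/parameters):

```shell
#!/bin/bash
# print_params DIR : print "name=value" for every parameter file in DIR.
print_params () {
  local f
  for f in "$1"/*; do
    printf '%s=%s\n' "$(basename "$f")" "$(cat "$f")"
  done
}

# demo on a temporary directory instead of /sys/module/zfs/parameters
d=$(mktemp -d)
echo 10737418240 > "${d}/zfs_arc_max"
print_params "${d}"   # -> zfs_arc_max=10737418240
rm -r "${d}"
```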
==ARC Cache==
===Get the current usage of cache===
<syntaxhighlight lang=bash>
# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 521779200
c_max 4 1073741824
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 25360
arc_meta_used 4 493285336
arc_meta_limit 4 805306368
arc_dnode_limit 4 80530636
arc_meta_max 4 706551816
arc_meta_min 4 16777216
sync_wait_for_async 4 357
arc_need_free 4 0
arc_sys_free 4 260889600
</syntaxhighlight>
===Limit the cache without reboot (not permanent)===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "$((512*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
</syntaxhighlight>
Now you have to drop the caches:
<syntaxhighlight lang=bash>
# echo 3 > /proc/sys/vm/drop_caches
</syntaxhighlight>
===Make the cache limit permanent===
For example, limit it to 512 MB (too small for production environments; just an example):
<syntaxhighlight lang=bash>
# echo "options zfs zfs_arc_max=$((512*1024*1024))" >> /etc/modprobe.d/zfs.conf
</syntaxhighlight>
The value takes effect after the next reboot.
===Check cache hits/misses===
<syntaxhighlight lang=bash>
# (while : ; do cat /proc/spl/kstat/zfs/arcstats ; sleep 5 ; done ) | awk '
BEGIN {
}
$1 ~ /(hits|misses)/ {
name=$1;
gsub(/[_]*(hits|misses)/,"",name);
if(name == ""){
name="global";
}
}
$1 ~ /hits/ {
hits[name] = $3 - hitslast[name]
hitslast[name] = $3
}
$1 ~ /misses/ {
misses[name] = $3 - misslast[name]
misslast[name] = $3
rate = 0
total = hits[name] + misses[name]
if (total)
rate = (hits[name] * 100) / total
if (name=="global")
printf "%30s %12s %12s %9s\n", "NAME", "HITS", "MISSES", "HITRATE"
printf "%30s %12d %12d %8.2f%%\n", name, hits[name], misses[name], rate
}
'
</syntaxhighlight>
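The HITRATE column in the awk script above is plain hits / (hits + misses) per interval. The same computation in isolation (the numbers are made-up samples, not from a real system):

```shell
#!/bin/bash
# hitrate HITS MISSES : print the cache hit rate in percent, two decimals.
hitrate () {
  awk -v h="$1" -v m="$2" 'BEGIN { printf "%.2f%%\n", (h + m) ? h * 100 / (h + m) : 0 }'
}

hitrate 1104 771   # -> 58.88%
hitrate 0 0        # -> 0.00% (guard against division by zero, like the script above)
```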
==Higher scrub performance==
<syntaxhighlight lang=bash highlight=3-5>
#!/bin/bash
#
## scrub_fast.sh
#
case $1 in
start)
echo 0 > /sys/module/zfs/parameters/zfs_scan_idle
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 512 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
stop)
echo 50 > /sys/module/zfs/parameters/zfs_scan_idle
echo 4 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 32 > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 1000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
;;
status)
for i in zfs_scan_idle zfs_scrub_delay zfs_top_maxinflight zfs_scan_min_time_ms zfs_vdev_scrub_{min,max}_active
do
param="/sys/module/zfs/parameters/${i}"
printf "%60s\t%d\n" "${param}" "$(cat ${param})"
done
;;
*)
echo "Usage: ${0} (start|stop|status)"
;;
esac
</syntaxhighlight>
==More information on zpool status==
<SyntaxHighlight lang=bash highlight=3-5>
#!/bin/bash
#
## print_zpool.sh
#
# Written by Lars Timmann <L@rs.Timmann.de> 2022
columns=5 # number of columns for zpool status
if [ ${#} -gt 0 ] && [ ${1} == "iostat" ]
then
command="iostat -v"
columns=7
shift
fi
stdbuf --output=L zpool ${command:-status} -P ${*} | awk -v columns=${columns} '
BEGIN {
command="lsscsi --scsi_id";
while( command | getline lsscsi ) {
count=split(lsscsi,fields);
dev=fields[count-1];
scsi_id[dev]=fields[1];
}
close(command);
command="ls -Ul /dev/disk/by-id/*";
while( command | getline ) {
dev=$NF;
gsub(/[\.\/]/,"",dev);
dev_id=$(NF-2);
device[dev_id]="/dev/"dev;
}
close(command);
}
$1 ~ /\/dev\// {
line=$0;
dev_by_id=$1;
dev_no_part=dev_by_id;
gsub(/(-part|)[0-9]+$/,"",dev_no_part);
if( NF > 5) {
count=split(line,a,FS,seps);
line=seps[0];
for(i=1;i<columns;i++){
line=line a[i] seps[i];
}
line=line a[columns];
for(i=columns+1;i<=count;i++){
rest=rest a[i] seps[i];
}
}
printf("%s %s %s",line,scsi_id[device[dev_no_part]],device[dev_by_id]);
if(rest!=""){
printf(" %s",rest);
rest="";
}
printf("\n");
next;
}
/^errors:/ {
print;
fflush();
next;
}
{
print;
}'
</SyntaxHighlight>
==Backup ZFS settings==
A little script which may be used at your own risk.
<syntaxhighlight lang=bash>
#!/bin/bash
# Written by Lars Timmann <L@rs.Timmann.de> 2018
# Tested on solaris 11.3 & Ubuntu Linux
# This script is a rotten bunch of code... rewrite it!
AWK_CMD=/usr/bin/gawk
ZPOOL_CMD=/sbin/zpool
ZFS_CMD=/sbin/zfs
ZDB_CMD=/sbin/zdb
function print_local_options () {
DATASET=$1
OPTION=$2
EXCLUDE_REGEX=$3
${ZFS_CMD} get -s local -Ho property,value -p ${OPTION} ${DATASET} | while read -r property value
do
if [[ ! ${property} =~ ${EXCLUDE_REGEX} ]]
then
if [ "_${property}_" == "_share.*_" ]
then
print_local_options "${DATASET}" 'share.all' '^$'
else
printf '\t-o %s=%s \\\n' "${property}" "${value}"
fi
fi
done
}
function print_filesystem () {
ZFS=$1
printf '%s create \\\n' "${ZFS_CMD}"
print_local_options "${ZFS}" 'all' '^$'
printf '\t%s\n' "${ZFS}"
}
function print_filesystems () {
ZPOOL=$1
for ZFS in $(${ZFS_CMD} list -Ho name -t filesystem -r ${ZPOOL})
do
if [ ${ZFS} == ${ZPOOL} ] ; then continue ; fi
printf '#\n## Filesystem: %s\n#\n\n' "${ZFS}"
print_filesystem ${ZFS}
printf '\n'
done
}
function print_volume () {
ZVOL=$1
volsize=$(${ZFS_CMD} get -Ho value volsize ${ZVOL})
volblocksize=$(${ZFS_CMD} get -Ho value volblocksize ${ZVOL})
printf '%s create \\\n\t-V %s \\\n\t-b %s \\\n' "${ZFS_CMD}" "${volsize}" "${volblocksize}"
print_local_options "${ZVOL}" 'all' '(volsize|refreservation)'
printf '\t%s\n' "${ZVOL}"
}
function print_volumes () {
ZPOOL=$1
for ZVOL in $(${ZFS_CMD} list -Ho name -t volume -r ${ZPOOL})
do
printf '#\n## Volume: %s\n#\n\n' "${ZVOL}"
print_volume ${ZVOL}
printf '\n'
done
}
function print_vdevs () {
ZPOOL=$1
${ZDB_CMD} -C ${ZPOOL} | ${AWK_CMD} -F':' '
$1 ~ /^[[:space:]]*type$/ {
gsub(/[ ]+/,"",$NF);
type=substr($NF,2,length($NF)-2);
if ( type == "mirror" ) {
printf " \\\n\t%s",type;
}
}
$1 ~ /^[[:space:]]*path$/ {
gsub(/[ ]+/,"",$NF);
vdev=substr($NF,2,length($NF)-2);
printf " \\\n\t%s",vdev;
}
END {
printf "\n";
}
'
}
function print_zpool () {
ZPOOL=$1
printf '#############################################################\n'
printf '#\n## ZPool: %s\n#\n' "${ZPOOL}"
printf '#############################################################\n\n'
printf '%s create \\\n' "${ZPOOL_CMD}"
print_local_options "${ZPOOL}" 'all' '/@/'
printf '\t%s' "${ZPOOL}"
print_vdevs "${ZPOOL}"
printf '\n'
printf '#############################################################\n\n'
print_filesystems "${ZPOOL}"
print_volumes "${ZPOOL}"
}
OS=$(uname -s)
eval $(uname -s)=1
HOSTNAME=$(hostname)
printf '#############################################################\n'
printf '# Hostname: %s\n' "${HOSTNAME}"
printf '#############################################################\n\n'
for ZPOOL in $(${ZPOOL_CMD} list -Ho name)
do
print_zpool ${ZPOOL}
done
</syntaxhighlight>
==Links==
* [[https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer HOWTO install Ubuntu 16.04 to a Whole Disk Native ZFS Root Filesystem using Ubiquity GUI installer]]
* [[https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS Ubuntu 16.04 Root on ZFS]]
c51c247a95d8a5a88a64076fb4902d72e07278f7
SSH FingerprintLogging
0
358
2750
2552
2023-07-11T15:27:39Z
Lollypop
2
/* SSH Fingerprintlogging */
wikitext
text/x-wiki
[[Category:SSH|Fingerprint]]
[[Category:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It makes it possible to set the [[Bash]] HISTFILE per logged-in user.
==Add magic to your .bashrc==
* ~/.bashrc
<syntaxhighlight lang=bash>
...
FINGERPRINT=$(ssh_client_array=( ${SSH_CLIENT} ); journalctl --lines=100 --grep "${ssh_client_array[0]} port ${ssh_client_array[1]}" --no-pager --quiet --unit=ssh.service | awk 'END{print $NF}')
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
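The HISTFILE line above chains two `:-` parameter expansions: the fingerprint wins, then SUDO_USER, then a literal default. The fallback pattern in isolation (`pick` is just a demo helper, not part of the .bashrc snippet):

```shell
#!/bin/bash
# pick FINGERPRINT SUDO_USER : emulate ${FINGERPRINT:-${SUDO_USER:-default}}.
# ":-" falls through on unset AND on empty values.
pick () {
  local FINGERPRINT=$1 SUDO_USER=$2
  echo "${FINGERPRINT:-${SUDO_USER:-default}}"
}

pick ""       ""      # -> default
pick ""       alice   # -> alice
pick SHA256:x alice   # -> SHA256:x
```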
5fe08e9ac73b1fb15f59ed299241159412c65a23
2751
2750
2023-07-14T07:29:45Z
Lollypop
2
/* Add magic to your .bashrc */
wikitext
text/x-wiki
[[Category:SSH|Fingerprint]]
[[Category:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It makes it possible to set the [[Bash]] HISTFILE per logged-in user.
==Add magic to your .bashrc==
* ~/.bashrc
Not fully working... wait...
<syntaxhighlight lang=bash>
...
FINGERPRINT=$([ -z "${SSH_CLIENT}" ] || { ssh_client_array=( ${SSH_CLIENT} ); journalctl --lines=100 --grep "${ssh_client_array[0]} port ${ssh_client_array[1]}" --no-pager --quiet --unit=ssh.service | awk 'END{print $NF}' ; } )
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
ec6f15b1aae704e0cd280b03e3892d2294e1c7bd
OpenSSL
0
347
2752
2709
2023-07-14T13:44:01Z
Lollypop
2
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout -print_certs
</syntaxhighlight>
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
=Print validity for certificate file=
<SyntaxHighlight lang=bash>
#!/bin/bash
for i in ${*}
do
certfile=${i}
enddate="$(openssl x509 -enddate -noout -in ${certfile} | sed -e 's#^.*=##g')"
declare -i valid_seconds=$(( $(date --date="${enddate}" '+%s') - $(date '+%s') ))
declare -i seconds=${valid_seconds}
declare -i days=$(( ${seconds} / ( 24 * 60 * 60 ) ))
seconds=$(( ${seconds} % ( 24 * 60 * 60 ) ))
declare -i hours=$(( ${seconds} / ( 60 * 60 ) ))
seconds=$(( ${seconds} % ( 60 * 60 ) ))
declare -i minutes=$(( ${seconds} / 60 ))
seconds=$(( ${seconds} % 60 ))
printf "%s: %s (%d days %d hours %d minutes %d seconds left)\n" "${certfile}" "$(date --date "${enddate}")" ${days} ${hours} ${minutes} ${seconds}
done
</SyntaxHighlight>
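The day/hour/minute/second cascade in the script above is repeated integer division and modulo; condensed into one helper (name and formatting are mine):

```shell
#!/bin/bash
# breakdown SECONDS : print a duration as days, hours, minutes, seconds.
breakdown () {
  local s=$1
  printf '%dd %dh %dm %ds\n' \
    $(( s / 86400 ))         \
    $(( s % 86400 / 3600 ))  \
    $(( s % 3600 / 60 ))     \
    $(( s % 60 ))
}

breakdown 90061   # -> 1d 1h 1m 1s  (86400 + 3600 + 60 + 1 seconds)
```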
<SyntaxHighlight lang=bash>
awk '
BEGIN{
count=0;
}
{
if ($0 == "-----BEGIN CERTIFICATE-----") { pem[count]=$0"\n"; cert[count]=""; }
else if ($0 == "-----END CERTIFICATE-----") { pem[count++]=pem[count]$0; }
else { pem[count]=pem[count]$0"\n"; cert[count]=cert[count]$0;}
}
END{
for(i=0;i<count;i++){
command=sprintf("openssl x509 -noout -inform PEM -subject -issuer <<EOF\n%s\nEOF\n",pem[i]);
while( command | getline subject) { print subject; }
close(command);
print pem[i];
}
}' < cert.pem
</SyntaxHighlight>
9437f422563e1ccacc2aa41242f8282bad9b4997
2754
2752
2023-07-21T07:49:42Z
Lollypop
2
/* Print validity for certificate file */
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout -print_certs
</syntaxhighlight>
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
=Print validity for certificate file=
<SyntaxHighlight lang=bash>
#!/bin/bash
for i in ${*}
do
certfile=${i}
enddate="$(openssl x509 -enddate -noout -in ${certfile} | sed -e 's#^.*=##g')"
declare -i valid_seconds=$(( $(date --date="${enddate}" '+%s') - $(date '+%s') ))
declare -i seconds=${valid_seconds}
declare -i days=$(( ${seconds} / ( 24 * 60 * 60 ) ))
seconds=$(( ${seconds} % ( 24 * 60 * 60 ) ))
declare -i hours=$(( ${seconds} / ( 60 * 60 ) ))
seconds=$(( ${seconds} % ( 60 * 60 ) ))
declare -i minutes=$(( ${seconds} / 60 ))
seconds=$(( ${seconds} % 60 ))
printf "%s: %s (%d days %d hours %d minutes %d seconds left)\n" "${certfile}" "$(date --date "${enddate}")" ${days} ${hours} ${minutes} ${seconds}
done
</SyntaxHighlight>
=Beautify chain certificate=
<SyntaxHighlight lang=bash>
awk '
BEGIN{
count=0;
}
{
if ($0 == "-----BEGIN CERTIFICATE-----") { pem[count]=$0"\n"; cert[count]=""; }
else if ($0 == "-----END CERTIFICATE-----") { pem[count++]=pem[count]$0; }
else { pem[count]=pem[count]$0"\n"; cert[count]=cert[count]$0;}
}
END{
for(i=0;i<count;i++){
command=sprintf("openssl x509 -noout -inform PEM -subject -issuer <<EOF\n%s\nEOF\n",pem[i]);
while( command | getline subject) { print subject; }
close(command);
print pem[i];
}
}' < cert.pem
</SyntaxHighlight>
e349db1591811a49eb0aaaff1bf4f003c6468177
2755
2754
2023-07-21T07:50:18Z
Lollypop
2
/* Beautify chain certificate */
wikitext
text/x-wiki
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout -print_certs
</syntaxhighlight>
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
=Print validity for certificate file=
<SyntaxHighlight lang=bash>
#!/bin/bash
for i in ${*}
do
certfile=${i}
enddate="$(openssl x509 -enddate -noout -in ${certfile} | sed -e 's#^.*=##g')"
declare -i valid_seconds=$(( $(date --date="${enddate}" '+%s') - $(date '+%s') ))
declare -i seconds=${valid_seconds}
declare -i days=$(( ${seconds} / ( 24 * 60 * 60 ) ))
seconds=$(( ${seconds} % ( 24 * 60 * 60 ) ))
declare -i hours=$(( ${seconds} / ( 60 * 60 ) ))
seconds=$(( ${seconds} % ( 60 * 60 ) ))
declare -i minutes=$(( ${seconds} / 60 ))
seconds=$(( ${seconds} % 60 ))
printf "%s: %s (%d days %d hours %d minutes %d seconds left)\n" "${certfile}" "$(date --date "${enddate}")" ${days} ${hours} ${minutes} ${seconds}
done
</SyntaxHighlight>
=Beautify chain certificate=
<SyntaxHighlight lang=bash>
$ awk '
BEGIN{
count=0;
}
{
if ($0 == "-----BEGIN CERTIFICATE-----") { pem[count]=$0"\n"; cert[count]=""; }
else if ($0 == "-----END CERTIFICATE-----") { pem[count++]=pem[count]$0; }
else { pem[count]=pem[count]$0"\n"; cert[count]=cert[count]$0;}
}
END{
for(i=0;i<count;i++){
command=sprintf("openssl x509 -noout -inform PEM -subject -issuer <<EOF\n%s\nEOF\n",pem[i]);
while( command | getline subject) { print subject; }
close(command);
print pem[i];
}
}' < cert.pem
</SyntaxHighlight>
e2068c50508a889e62b1de2656b42b5778832b87
Systemd
0
233
2756
2650
2023-08-31T09:30:45Z
Lollypop
2
/* Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount */
wikitext
text/x-wiki
[[Category:Linux]]
=systemd=
Yes, as is usual for daemon names, this one is written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty Linux init system.
It has many new features: it can supervise processes after they have started, list the sockets owned by processes it started, add security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)], and a lot more.
Maybe one day it will be as good as SMF (Service Management Facility) on Solaris :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- and software-related units.
<syntaxhighlight lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</syntaxhighlight>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<syntaxhighlight lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</syntaxhighlight>
Ah, somebody deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<syntaxhighlight lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</syntaxhighlight>
==Display unit declaration==
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==Sockets==
<syntaxhighlight lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</syntaxhighlight>
==View dependencies==
What depends on ''zfs.target'':
<syntaxhighlight lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</syntaxhighlight>
And what do we need to reach the ''zfs.target''?
<syntaxhighlight lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</syntaxhighlight>
==Get the main PID of a service==
<syntaxhighlight lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</syntaxhighlight>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<syntaxhighlight lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</syntaxhighlight>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped when the process is started.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but take a look there to understand what these capabilities are.
'''BUT''' beware of programs which simply test for UID 0!
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it currently has. UID changes are prohibited: no set-UID binary will give an attacker more privileges than those of the user running the exploited service.
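A minimal drop-in sketch, assuming a service named mydaemon.service (the unit name is hypothetical):

```ini
# /etc/systemd/system/mydaemon.service.d/no-new-privs.conf (hypothetical unit)
[Service]
NoNewPrivileges=true
```

Run systemctl daemon-reload and restart the service for the drop-in to apply.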
==Limiting access to a socket==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
Deny from all, but the monitoring server (172.17.128.193):
<syntaxhighlight lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</syntaxhighlight>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
First clear the old value with an empty assignment, then set the new one:
<syntaxhighlight lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</syntaxhighlight>
=systemd-resolved, the name resolution service=
==Status==
<syntaxhighlight lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</syntaxhighlight>
==Cache statistics==
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
==Flush the cache==
<syntaxhighlight lang=bash>
$ systemd-resolve --flush-caches
</syntaxhighlight>
Check with:
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration is easily done through <i>/etc/systemd/timesyncd.conf</i>:
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</syntaxhighlight>
''NTP'' is a space-separated list of NTP servers.
''FallbackNTP'' lists servers that are only used if none of the servers in ''NTP'' can be reached.
If you want to split the configuration into multiple files or generate them at boot, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
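For example, such a drop-in could look like this (the file name is arbitrary, only the <i>.conf</i> ending matters):
<syntaxhighlight lang=ini>
# /etc/systemd/timesyncd.conf.d/10-local.conf
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
</syntaxhighlight>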
After you have set up the configuration, enable timesyncd via:
<syntaxhighlight lang=bash>
# timedatectl set-ntp true
</syntaxhighlight>
Verify the result with:
<syntaxhighlight lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</syntaxhighlight>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<syntaxhighlight lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</syntaxhighlight>
Hmm... let us take a look at ntp:
<syntaxhighlight lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</syntaxhighlight>
Maybe we should uninstall or disable ntp first ;-).
<syntaxhighlight lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</syntaxhighlight>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
This means that to reach the ''zfs.target'' we want ''zed.service'' to be started (if enabled), and we need ''zfs-mount.service'' and ''zfs-share.service''.
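The difference between the two: a failing ''Requires='' dependency pulls the depending unit down with it, while a ''Wants='' dependency is only pulled in and its failure is tolerated. Note that neither implies ordering; add ''After='' if you need that. A minimal sketch with made-up unit names:
<syntaxhighlight lang=ini>
[Unit]
Description=Example target
# Hard dependency: the target fails if this unit fails
Requires=essential.service
# Soft dependency: started too, but its failure is tolerated
Wants=optional.service
</syntaxhighlight>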
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private instance of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are removed. This is implemented via a separate mount namespace for this unit.
<syntaxhighlight lang=ini>
[Service]
...
PrivateTmp=true|false
...
</syntaxhighlight>
If several units should share a private tmp directory, you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]'' (a [Unit] option).
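A sketch with two hypothetical units sharing one private /tmp:
<syntaxhighlight lang=ini>
# a.service
[Service]
PrivateTmp=true

# b.service -- joins the namespace of a.service
[Unit]
JoinsNamespaceOf=a.service
[Service]
PrivateTmp=true
</syntaxhighlight>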
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<syntaxhighlight lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</syntaxhighlight>
With this capability set, we can use it as a normal user:
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</syntaxhighlight>
If we remove this capability it does not work:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</syntaxhighlight>
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</syntaxhighlight>
Of course it still works as root, since root has all capabilities:
<syntaxhighlight lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</syntaxhighlight>
So we better set this capability again:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</syntaxhighlight>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</syntaxhighlight>
===Get the name for the needed unit file===
The name of a .mount unit file has to be derived from the mount destination path: slashes become dashes, and literal dashes must be escaped. To get the resulting name you can simply use systemd-escape.
<syntaxhighlight lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</syntaxhighlight>
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\x2dlog.mount for the mount===
Remember to double the backslash (\\) on the shell command line so that systemctl sees \x2d (the escaped dash -).
<syntaxhighlight lang=bash>
# systemctl edit var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this content into the file:
<syntaxhighlight lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
===Mount the socket===
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</syntaxhighlight>
==Tell journald to forward log lines to the socket==
===/etc/systemd/journald.conf===
<syntaxhighlight lang=ini>
[Journal]
...
ForwardToSyslog=yes
...
</syntaxhighlight>
Restart the journal daemon:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<syntaxhighlight>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</syntaxhighlight>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</syntaxhighlight>
===Restart syslog-ng daemon===
<syntaxhighlight lang=bash>
# systemctl restart syslog-ng.service
</syntaxhighlight>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for PrivateTmp directories, for example those of <i>apache2.service</i>, you may place a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i> like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot ID.
What is that? An ID which is generated anew at each boot.
You can get the boot-id with:
<syntaxhighlight lang=bash>
# journalctl --list-boots
</syntaxhighlight>
The second field of the last line is the current one, e.g.:
<syntaxhighlight lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</syntaxhighlight>
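The kernel exports the same ID as well, but note the format difference: this file contains dashes, while <i>%b</i> and <i>journalctl</i> print it without them (the value shown is illustrative):
<syntaxhighlight lang=bash>
# cat /proc/sys/kernel/random/boot_id
52ae0c2a-587a-4704-8ee7-6818ede269a6
</syntaxhighlight>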
When will that be? Try:
<syntaxhighlight lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</syntaxhighlight>
OK, but you probably want to run it once an hour? Just reschedule the timer like this:
<syntaxhighlight lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</syntaxhighlight>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== fwupd-refresh.service behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit fwupd-refresh.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Service]
Environment=http_proxy="http://user:passw0rd@proxy.intern.net:8080" https_proxy="http://user:passw0rd@proxy.intern.net:8080"
PassEnvironment=http_proxy https_proxy
</syntaxhighlight>
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<syntaxhighlight lang=ini>
# /etc/systemd/system/my-tomcat.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<syntaxhighlight>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</syntaxhighlight>
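With this rule in place, the <i>tomcat</i> user can manage the unit directly, without sudo or root:
<syntaxhighlight lang=bash>
tomcat $ systemctl restart tomcat-example.service
tomcat $ systemctl stop tomcat-example.service
</syntaxhighlight>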
== Oracle ==
UNTESTED, just an example!
File this as
/usr/lib/systemd/system/dbora@.service (SLES12)
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: Exec lines are not run through a shell, so no redirections or '&' here
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</syntaxhighlight>
e7000d20cf86b23e3359d38e467cb44311ff409f
Network troubleshooting
0
284
2758
2538
2023-09-21T12:33:12Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Networking|Troubleshooting]]
=Network troubleshooting=
==Find open ports==
===lsof===
Show all open IPv4/IPv6/both TCP ports:
<syntaxhighlight lang=bash>
# lsof -Pni 4TCP -sTCP:LISTEN
# lsof -Pni 6TCP -sTCP:LISTEN
# lsof -Pni -sTCP:LISTEN
</syntaxhighlight>
Show all open IPv4/IPv6/both addresses and applications listening on TCP port https(443):
<syntaxhighlight lang=bash>
# lsof -Pni 4TCP:443 -sTCP:LISTEN
# lsof -Pni 6TCP:443 -sTCP:LISTEN
# lsof -Pni TCP:443 -sTCP:LISTEN
</syntaxhighlight>
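On current systems, ss(8) from iproute2 gives roughly the same information (-l listening, -t TCP, -n numeric ports, -p owning process):
<syntaxhighlight lang=bash>
# ss -4ltnp
# ss -6ltnp
# ss -ltnp
# ss -ltnp 'sport = :443'
</syntaxhighlight>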
==Testing connections from virtual interfaces / virtual IPs==
=== Ping ===
<syntaxhighlight lang=bash>
# ping -I <your virtual ip> <destination>
</syntaxhighlight>
On Solaris
<syntaxhighlight lang=bash>
# ping -sni <your virtual ip> <destination>
</syntaxhighlight>
=== Traceroute ===
<syntaxhighlight lang=bash>
# traceroute -s <your virtual ip> <destination>
</syntaxhighlight>
=== SSH ===
<syntaxhighlight lang=bash>
# ssh <user>@<destination> -o BindAddress=<your virtual ip>
</syntaxhighlight>
=== Telnet ===
<syntaxhighlight lang=bash>
# telnet -b <your virtual ip> <destination>
</syntaxhighlight>
== Interface details ==
=== Linux ===
<syntaxhighlight lang=bash>
# ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
</syntaxhighlight>
=== Solaris ===
1bf97e42f3ebfbe6a5b2a65c5809a8a6f578b5f4
Ubuntu apt
0
120
2759
2376
2023-09-22T08:41:06Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Ubuntu|apt]]
== Get all non-LTS packages ==
<syntaxhighlight lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</syntaxhighlight>
== Ubuntu support status ==
<syntaxhighlight lang=bash>
$ ubuntu-support-status --show-unsupported
</syntaxhighlight>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<syntaxhighlight lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is prefered if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</syntaxhighlight>
==Use this proxy config in the shell==
<syntaxhighlight lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</syntaxhighlight>
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the nameservice.
=== Pin the normal release ===
<syntaxhighlight lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</syntaxhighlight>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Upgrade to the packages from the new release ===
<syntaxhighlight lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</syntaxhighlight>
=== Check with "apt-cache policy" which version is preferred now ===
<syntaxhighlight lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</syntaxhighlight>
=== Try to install a package from the new release ===
<syntaxhighlight lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</syntaxhighlight>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<syntaxhighlight lang=bash>
# apt -t zesty install libstdc++6
...
</syntaxhighlight>
== Get man pages in minimized ubuntu ==
<syntaxhighlight lang=bash>
# man ls
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, including manpages, you can run the 'unminimize'
command. You will still need to ensure the 'man-db' package is installed.
</syntaxhighlight>
<syntaxhighlight lang=bash>
# rm /usr/bin/man
# dpkg-divert --quiet --remove --rename /usr/bin/man
</syntaxhighlight>
ec7bcb13fe927a50204d651d1193353f272068ea
2760
2759
2023-09-22T08:43:36Z
Lollypop
2
/* Get man pages in minimized ubuntu */
wikitext
text/x-wiki
[[Category:Ubuntu|apt]]
== Get all non-LTS packages ==
<syntaxhighlight lang=awk>
# dpkg --list | awk '/^ii/ {print $2}' | xargs apt-cache show | awk '
BEGIN{
support="none";
}
/^Package:/,/^$/{
if(/^Package:/){ pkg=$2; }
if(/^Supported:/){ support=$2; }
if(/^$/ && support != "5y"){ printf "%s:\t%s\n", pkg, support; }
}
/^$/ {
support="none";
}'
</syntaxhighlight>
== Ubuntu support status ==
<syntaxhighlight lang=bash>
$ ubuntu-support-status --show-unsupported
</syntaxhighlight>
== Configuring a proxy for apt ==
Put this into your /etc/apt/apt.conf.d/00proxy :
<syntaxhighlight lang=bash>
// Options for the downloading routines
Acquire
{
Queue-Mode "host"; // host|access
Retries "0";
Source-Symlinks "true";
// HTTP method configuration
http
{
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
Timeout "120";
Pipeline-Depth "5";
// Cache Control. Note these do not work with Squid 2.0.2
No-Cache "false";
Max-Age "86400"; // 1 Day age on index files
No-Store "false"; // Prevent the cache from storing archives
};
ftp
{
Proxy "http://<user>:<password>@<proxy-host>:<proxy-port>";
//Proxy::http.us.debian.org "DIRECT"; // Specific per-host setting
Timeout "120";
/* Passive mode control, proxy, non-proxy and per-host. Pasv mode
is prefered if possible */
Passive "true";
Proxy::Passive "true";
Passive::http.us.debian.org "true"; // Specific per-host setting
};
cdrom
{
mount "/cdrom";
// You need the trailing slash!
"/cdrom"
{
Mount "sleep 1000";
UMount "sleep 500";
}
};
};
</syntaxhighlight>
==Use this proxy config in the shell==
<syntaxhighlight lang=bash>
eval $(apt-config dump Acquire | awk -F '(::| )' '$3 ~ /Proxy/{printf "%s_proxy=%s\nexport %s_proxy\n",$2,$4,$2;}')
</syntaxhighlight>
== Getting some packages from a newer release ==
In this example we are living in <i>xenial</i> and want PowerDNS from <i>zesty</i> because we need CAA records in the nameservice.
=== Pin the normal release ===
<syntaxhighlight lang=bash>
# echo 'APT::Default-Release "xenial";' > /etc/apt/apt.conf.d/01pinning
</syntaxhighlight>
=== Add new release to /etc/apt/sources.list ===
This is the /etc/apt/sources.list on my x86 64bit Ubuntu:
<pre>
# Xenial
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu xenial-security main restricted universe
# Zesty
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb [arch=amd64] http://de.archive.ubuntu.com/ubuntu/ zesty-updates main restricted universe
deb [arch=amd64] http://security.ubuntu.com/ubuntu zesty-security main restricted universe
</pre>
=== Tell apt via /etc/apt/preferences.d/... to prefer some packages from the new release ===
This is the /etc/apt/preferences.d/pdns:
<pre>
Package: pdns-*
Pin: release a=zesty, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-updates, l=Ubuntu
Pin-Priority: 1000
Package: pdns-*
Pin: release a=zesty-security, l=Ubuntu
Pin-Priority: 1000
</pre>
=== Upgrade to the packages from the new release ===
<syntaxhighlight lang=bash>
# apt update
...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
...
</syntaxhighlight>
=== Check with "apt-cache policy" which version is preferred now ===
<syntaxhighlight lang=bash>
# apt-cache policy pdns-server pdns-tools
pdns-server:
Installed: 4.0.3-1
Candidate: 4.0.3-1
Version table:
*** 4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
100 /var/lib/dpkg/status
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
pdns-tools:
Installed: (none)
Candidate: 4.0.3-1
Version table:
4.0.3-1 1000
500 http://de.archive.ubuntu.com/ubuntu zesty/universe amd64 Packages
4.0.0~alpha2-3build1 990
990 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
</syntaxhighlight>
=== Try to install a package from the new release ===
<syntaxhighlight lang=bash>
# apt install pdns-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pdns-tools : Depends: libstdc++6 (>= 6) but 5.4.0-6ubuntu1~16.04.5 is to be installed
E: Unable to correct problems, you have held broken packages.
</syntaxhighlight>
This shows the pinning to xenial works ;-).
=== Override pinning for one package ===
<syntaxhighlight lang=bash>
# apt -t zesty install libstdc++6
...
</syntaxhighlight>
== Get man pages in minimized ubuntu ==
<syntaxhighlight lang=bash>
# man ls
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, including manpages, you can run the 'unminimize'
command. You will still need to ensure the 'man-db' package is installed.
</syntaxhighlight>
In <i>/etc/dpkg/dpkg.cfg.d/excludes</i> comment out <i>path-exclude=/usr/share/man/*</i> :
<syntaxhighlight lang=bash>
# Drop all man pages
#path-exclude=/usr/share/man/*
...
</syntaxhighlight>
<syntaxhighlight lang=bash>
# rm /usr/bin/man
# dpkg-divert --quiet --remove --rename /usr/bin/man
</syntaxhighlight>
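Note that commenting out the exclude only affects packages installed from now on. To get the man pages of an already installed package back, reinstall it, e.g. (the package name is just an example):
<syntaxhighlight lang=bash>
# apt install --reinstall coreutils
</syntaxhighlight>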
43a37e9aa40b33e5dc0f206bd051cfdacc8b3d01
Php-fpm
0
408
2761
2023-09-22T10:30:00Z
Lollypop
2
Created page with " =Health check on command line= Let us assume you have configured your ping in php-fpm an want to try it on a specific socket: On Ubuntu you need this package to get cgi-fcgi: <SyntaxHighlight lang=bash> # apt install cgi-fcgi </SyntaxHighlight> I have configured my socket for my nextcloud pool like this: <SyntaxHighlight lang=bash> # grep -E '^listen' /etc/php/8.1/fpm/pool.d/nextcloud.conf listen = /run/php/php-fpm.nextcloud.sock listen.owner = www-data listen.grou..."
wikitext
text/x-wiki
=Health check on command line=
Let us assume you have configured the ping endpoint in php-fpm and want to try it on a specific socket.
On Ubuntu you need this package to get cgi-fcgi:
<SyntaxHighlight lang=bash>
# apt install cgi-fcgi
</SyntaxHighlight>
I have configured my socket for my nextcloud pool like this:
<SyntaxHighlight lang=bash>
# grep -E '^listen' /etc/php/8.1/fpm/pool.d/nextcloud.conf
listen = /run/php/php-fpm.nextcloud.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0600
</SyntaxHighlight>
And the ping like this:
<SyntaxHighlight lang=bash>
# grep -E '^ping\.' /etc/php/8.1/fpm/pool.d/nextcloud.conf
ping.path = /fpm-ping
ping.response = pong
</SyntaxHighlight>
Then I can send a ping directly to this socket (not through the webserver) like this:
<SyntaxHighlight lang=bash>
# SCRIPT_NAME=/fpm-ping SCRIPT_FILENAME=/fpm-ping REQUEST_METHOD=GET /usr/bin/cgi-fcgi -bind -connect /run/php/php-fpm.nextcloud.sock ; echo
Content-type: text/plain;charset=UTF-8
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate, max-age=0
pong
</SyntaxHighlight>
9154406aab8fb0a3a3e3199b9e1fb8471cbe235c
2762
2761
2023-09-22T10:34:32Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Web]]
=Health check on command line=
Let us assume you have configured the ping endpoint in php-fpm and want to try it on a specific socket.
On Ubuntu you need this package to get cgi-fcgi:
<SyntaxHighlight lang=bash>
# apt install cgi-fcgi
</SyntaxHighlight>
I have configured my socket for my nextcloud pool like this:
<SyntaxHighlight lang=bash>
# grep -E '^listen' /etc/php/8.1/fpm/pool.d/nextcloud.conf
listen = /run/php/php-fpm.nextcloud.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0600
</SyntaxHighlight>
And the ping like this:
<SyntaxHighlight lang=bash>
# grep -E '^ping\.' /etc/php/8.1/fpm/pool.d/nextcloud.conf
ping.path = /fpm-ping
ping.response = pong
</SyntaxHighlight>
Then I can send a ping directly to this socket (not through the webserver) like this:
<SyntaxHighlight lang=bash>
# SCRIPT_NAME=/fpm-ping SCRIPT_FILENAME=/fpm-ping REQUEST_METHOD=GET /usr/bin/cgi-fcgi -bind -connect /run/php/php-fpm.nextcloud.sock ; echo
Content-type: text/plain;charset=UTF-8
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate, max-age=0
pong
</SyntaxHighlight>
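For monitoring it helps to wrap the call above in a small script that checks the response body against the configured <i>ping.response</i> and sets an exit code. A sketch (socket path taken from the pool config above; everything else is an assumption):
<syntaxhighlight lang=bash>
#!/bin/bash
# Health-check wrapper around the cgi-fcgi call; exit 0 on pong,
# 2 otherwise.
SOCKET=${1:-/run/php/php-fpm.nextcloud.sock}

# Read the FCGI response on stdin and succeed if the body is exactly
# "pong": drop the header block (up to the first empty line), strip
# carriage returns, compare.
check_pong() {
    sed -e '1,/^\r\{0,1\}$/d' | tr -d '\r' | grep -qx 'pong'
}

# Skipped silently when the socket does not exist; adapt for real
# monitoring, where a missing socket should also be CRITICAL.
if [ -S "$SOCKET" ]; then
    if SCRIPT_NAME=/fpm-ping SCRIPT_FILENAME=/fpm-ping REQUEST_METHOD=GET \
       /usr/bin/cgi-fcgi -bind -connect "$SOCKET" 2>/dev/null | check_pong
    then
        echo "OK - $SOCKET answers with pong"
    else
        echo "CRITICAL - no pong from $SOCKET" >&2
        exit 2
    fi
fi
</syntaxhighlight>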
5d163d92327fc174b4ba1729ac7cc19806e680c8
Linux Tipps und Tricks
0
273
2763
2443
2023-10-25T09:18:48Z
Lollypop
2
/* Optional: Resize the LVM physical volume */
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
The first line enables sysrq, the second sends the reboot request.
For more details, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that devices removed from the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
20
# echo 1 > /sys/class/block/${device}/device/rescan
# echo $[ 512 * $(cat /sys/block/${device}/size) / 1024 ** 3 ]
25
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
<syntaxhighlight lang=bash>
# mount
# pvs
# zpool status
# etc.
</syntaxhighlight>
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the SCSI address reported by lsscsi above.
Et voilà:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now inform the zpool that it may grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Online the already-online device to force the zpool to recheck the size and expand, without an export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voilà:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
<syntaxhighlight lang=bash>
# lvextend -l +100%FREE /dev/vg-root/log
</syntaxhighlight>
Done.
3d1a9f5fc1e18d4768a6ee866ad3a17d384cd8fc
2766
2763
2023-11-07T10:05:16Z
Lollypop
2
/* Rescan a device (for example after changing a VMDK size) */
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
The first line enables sysrq, the second sends the reboot request.
For more details, see [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
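Before firing a trigger it is worth checking whether sysrq is allowed at all; the value is a bitmask (1 = everything enabled, 0 = disabled, other values per the kernel documentation):
<syntaxhighlight lang=bash>
# Read-only: show the current sysrq policy without changing anything.
cat /proc/sys/kernel/sysrq
</syntaxhighlight>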
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that devices removed from the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
20 GB
# echo 1 > /sys/class/block/${device}/device/rescan
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
25 GB
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
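The sector arithmetic used above (sectors × 512 bytes, as reported by /sys regardless of the physical sector size) can be wrapped in a tiny helper; function name <i>size_gib</i> is an assumption:
<syntaxhighlight lang=bash>
# Convert a sector count from /sys/block/<dev>/size into whole GiB.
size_gib() {
    echo $(( $1 * 512 / 1024**3 ))
}

size_gib 52428800   # 52428800 sectors -> 25 (GiB)
# Usage on a real device: size_gib "$(</sys/block/sda/size)"
</syntaxhighlight>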
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
<syntaxhighlight lang=bash>
# mount
# pvs
# zpool status
# etc.
</syntaxhighlight>
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the SCSI address reported by lsscsi above.
Et voilà:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
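To double-check which device name belongs to a SCSI address before deleting anything, sysfs can resolve the mapping directly; a sketch (function name assumed, the second argument exists only so it can be tested against a fake sysfs tree):
<syntaxhighlight lang=bash>
# Resolve a SCSI address (as printed by lsscsi) to its kernel block
# device name via sysfs.
scsi_to_dev() {
    local addr=$1 sysfs=${2:-/sys}
    ls "$sysfs/bus/scsi/devices/$addr/block" 2>/dev/null
}

# e.g. scsi_to_dev 32:0:0:0 prints "sdb" on the host above
</syntaxhighlight>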
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMware from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now inform the zpool that it may grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Online the already-online device to force the zpool to recheck the size and expand, without an export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voilà:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand back to off if you want to prevent automatic expansion the next time the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
<syntaxhighlight lang=bash>
# lvextend -l +100%FREE /dev/vg-root/log
</syntaxhighlight>
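After <i>lvextend</i> the logical volume is bigger, but the filesystem in it still has the old size and must be grown too. Which command to use depends on the filesystem; a sketch (helper name assumed; resize2fs takes the device, xfs_growfs the mount point):
<syntaxhighlight lang=bash>
# Pick the right grow command for the filesystem on a logical volume.
grow_cmd() {
    case $1 in
        ext2|ext3|ext4) echo "resize2fs $2" ;;
        xfs)            echo "xfs_growfs $2" ;;
        *)              echo "unsupported fs: $1" >&2; return 1 ;;
    esac
}

grow_cmd ext4 /dev/vg-root/log   # -> resize2fs /dev/vg-root/log
</syntaxhighlight>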
Done.
56692ed1365ec14392b204837b12b0ba11733a81
Ansible tips and tricks
0
299
2764
2732
2023-11-06T09:53:24Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
= Ansible commandline =
== Get settings for host ==
=== Show inventory for host ===
<syntaxhighlight lang=bash>
$ ansible-inventory --host <host>
</syntaxhighlight>
=== Gathering settings for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
=== Gathering groups for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Get information from host ==
=== Get all installed kernel versions: ===
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
=== Get all installed releases: ===
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
= Gathering facts from file =
== Variables from an Oracle response file ==
This snippet reads some variables from the response file, stores them in the variable <i>oracle_environment</i>, and also sets each variable itself (prefixed with oracle_ if not already present). The variable <i>oracle_environment</i> can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
= Gathering oracle environment =
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
= NetApp Modules =
== NetApp role ==
=== Snapshot user ===
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
c7f0b4272bcb77db2b844b00633425b62d5f7309
2765
2764
2023-11-06T09:53:40Z
Lollypop
2
/* Show inventory for host */
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
= Ansible commandline =
== Get settings for host ==
=== Show inventory for host ===
<syntaxhighlight lang=bash>
$ ansible-inventory --host ${hostname}
</syntaxhighlight>
=== Gathering settings for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
=== Gathering groups for host in ${hostname}: ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Get information from host ==
=== Get all installed kernel versions: ===
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
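To see what the perl substitution does without contacting real hosts, feed it a canned sample line (hostname and kernel version invented):
<syntaxhighlight lang=bash>
# One ad-hoc result block collapses into one CSV-like line:
printf 'web01 | CHANGED | rc=0 >>\n5.15.0-88-generic\n' \
  | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g'
# prints: web01;5.15.0-88-generic
</syntaxhighlight>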
=== Get all installed releases: ===
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
= Gathering facts from file =
== Variables from an Oracle response file ==
This snippet reads some variables from the response file, stores them in the variable <i>oracle_environment</i>, and also sets each variable itself (prefixed with oracle_ if not already present). The variable <i>oracle_environment</i> can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
shell: |
awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
register: oracle_response_variables
with_items:
- ORACLE_HOME
- ORACLE_BASE
- INVENTORY_LOCATION
tags:
- oracle
- oracle_install
- name: Setting facts from response file to oracle_environment
set_fact:
"{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
oracle_environment: "{{oracle_environment|default([]) + [ {item.item: item.stdout} ] }}"
with_items:
- "{{ oracle_response_variables.results }}"
tags:
- oracle
- oracle_install
</syntaxhighlight>
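The gathered <i>oracle_environment</i> (a list of one-key dictionaries, which Ansible merges when used as an environment) can then be consumed like this; a sketch, with the task name and command assumed:
<syntaxhighlight lang=yaml>
- name: Run a command with the gathered Oracle environment
  shell: |
    env | grep '^ORACLE_'
  environment: "{{ oracle_environment }}"
  tags:
    - oracle
</syntaxhighlight>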
= Gathering oracle environment =
<syntaxhighlight lang=yaml>
- name: Calling oraenv
shell: |
# Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
# Call /usr/local/bin/oraenv for additional settings
. /usr/local/bin/oraenv -s
# Just register what we need for Oracle
env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
register: env
changed_when: False
- name: Creating environment ora_env
set_fact:
ora_env: |
{# Creating empty dictionary #}
{%- set tmp_env={} -%}
{# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
{%- for line in env.stdout_lines -%}
{{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
{%- endfor -%}
{# Print the created variable #}
{{ tmp_env }}
- debug: var=ora_env
</syntaxhighlight>
= NetApp Modules =
== NetApp role ==
=== Snapshot user ===
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
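With the restricted role in place, a playbook task using the <i>na_ontap_snapshot</i> module might look like this (a sketch: volume name, hostname and vault variable are assumptions; note the snapshot name must match the ansible_* query from the role):
<syntaxhighlight lang=yaml>
- name: Create a snapshot as the restricted user
  netapp.ontap.na_ontap_snapshot:
    state: present
    snapshot: ansible_nightly
    volume: vol_mysql
    vserver: cluster01
    hostname: cluster01.example.org
    username: ansible-snapuser
    password: "{{ vault_snapuser_password }}"
</syntaxhighlight>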
ca758fe221392a1966da5408eda2e6c98acac373
RadSecProxy
0
345
2767
2487
2023-11-30T08:14:44Z
Lollypop
2
/* RadSecProxy */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed, since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing; reduce to 3 later
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the T-TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
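Alternatively, OpenSSL can create the hash-named links for a whole directory; since OpenSSL 1.1 the subcommand is <i>openssl rehash</i> (older installations ship the equivalent <i>c_rehash</i> script). Demonstrated on a throwaway directory with a self-signed dummy certificate:
<syntaxhighlight lang=bash>
# Create a demo CA dir with one self-signed certificate...
cadir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
    -keyout /dev/null -out "$cadir/demo.pem" 2>/dev/null
# ...and let OpenSSL create the <hash>.0 symlinks:
openssl rehash "$cadir"
ls "$cadir"
# For the real setup: openssl rehash /etc/radsec/cert/ca
</syntaxhighlight>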
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log file, or use syslog instead.
The config, certificate, and key are not readable through the user's primary group (nogroup) but through the group radsecproxy, which the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this into /lib/systemd/system/radsecproxy.service and run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<syntaxhighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</syntaxhighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected freeradius servers, check whether you got the file from a Windows user and whether it is in DOS format, like this (use vi, not less):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it to Unix format.
Yes... it is ugly that radsecproxy does not convert this itself, but that's life.
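A quick way to spot DOS line endings without opening an editor is to count carriage returns. A minimal sketch; the sample file path and content below are made up for illustration:

```shell
# Write a small sample with DOS (CRLF) line endings, as it would arrive
# from a Windows machine (content is just an illustrative fragment).
printf -- '-----BEGIN CERTIFICATE-----\r\nMIIHszCC...\r\n' > /tmp/dos_sample.pem

# Count the lines that still carry a carriage return. A clean Unix file
# reports 0, so anything greater means the file needs dos2unix.
grep -c "$(printf '\r')" /tmp/dos_sample.pem
# prints 2
```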
0ab6ea1278155c4780b6d6c8a657c39862551580
2772
2767
2024-01-12T14:07:26Z
Lollypop
2
/* Certificate Enddate */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed, since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log file, or use syslog instead.
The config, certificate, and key are not readable through the user's primary group (nogroup) but through the group radsecproxy, which the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this into /lib/systemd/system/radsecproxy.service and run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<syntaxhighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</syntaxhighlight>
For this to work, you need to have configured a client that matches the host you are connecting from:
<pre>
client openssl_check_radsec {
host <host from where you use openssl s_client>
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will just show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected freeradius servers, check whether you got the file from a Windows user and whether it is in DOS format, like this (use vi, not less):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it to Unix format.
Yes... it is ugly that radsecproxy does not convert this itself, but that's life.
5b902c1b2c859d29da0d379ba1c77f16d51d9247
2773
2772
2024-01-12T14:08:10Z
Lollypop
2
/* Certificate Enddate */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed, since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
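If you have several CA files, newer OpenSSL can generate the <hash>.0 names for a whole directory with `openssl rehash` (older installations ship the `c_rehash` script instead). A self-contained sketch with a throwaway certificate; the CN and all /tmp paths are made up, in production the target directory is /etc/radsec/cert/ca:

```shell
# Generate a throwaway self-signed CA certificate (CN is hypothetical).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-ca.pem 2>/dev/null

# Compute the subject hash that OpenSSL uses to look the file up ...
hash=$(openssl x509 -noout -hash -in /tmp/demo-ca.pem)

# ... and install the certificate under that name, so a directory given
# as CACertificatePath can find it.
mkdir -p /tmp/demo-ca-dir
cp /tmp/demo-ca.pem "/tmp/demo-ca-dir/${hash}.0"
echo "$hash"
```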
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
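The realm blocks above are tried in order, first match wins. You can approximate that matching logic with grep -E to sanity-check which identities end up where; this is a rough sketch with made-up identities, and radsecproxy's regex handling may differ from grep's in detail:

```shell
# Approximate the ordered realm matching from realms.conf with grep -E.
# First match wins, just like radsecproxy tries realm blocks in order.
for id in 'eduroam@domain.tld' 'alice@domain.tld' 'bob@nodot' 'carol@sub..tld' 'dave@uni-x.de'; do
  if   echo "$id" | grep -Eq '(eduroam|anonymous)@domain\.tld$'; then echo "$id -> internal freeradius"
  elif echo "$id" | grep -Eq '@domain\.tld$'; then echo "$id -> reject: use anonymous identity"
  elif echo "$id" | grep -Eq '@[^.]+$';       then echo "$id -> reject: no dot in realm"
  elif echo "$id" | grep -Eq '@.*\.\..*$';    then echo "$id -> reject: double dot"
  else                                             echo "$id -> forwarded to tlr1/tlr2"
  fi
done
```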
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But you need write access to the log file, or use syslog instead.
The config, certificate, and key are not readable through the user's primary group (nogroup) but through the group radsecproxy, which the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
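To see what this chmod scheme produces without touching /etc/radsec, you can replay it on a scratch directory; the /tmp paths and file names below are purely illustrative:

```shell
# Rebuild the layout in /tmp and apply the same mode scheme as above.
mkdir -p /tmp/radsec-demo/cert/ca
touch /tmp/radsec-demo/radsecproxy.conf /tmp/radsec-demo/cert/radsecproxy-key.pem
find /tmp/radsec-demo -type d -exec chmod 0750 {} \;
find /tmp/radsec-demo -type f -exec chmod 0640 {} \;

# Directories end up 750 (owner rwx, group rx), files 640 (owner rw,
# group r) -- so the group radsecproxy can read but not modify anything.
stat -c '%a %n' /tmp/radsec-demo /tmp/radsec-demo/radsecproxy.conf
```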
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this into /lib/systemd/system/radsecproxy.service and run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work, you need to have configured a client that matches the host you are connecting from:
<pre>
client openssl_check_radsec {
host <host from where you use openssl s_client>
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will just show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected freeradius servers, check whether you got the file from a Windows user and whether it is in DOS format, like this (use vi, not less):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it to Unix format.
Yes... it is ugly that radsecproxy does not convert this itself, but that's life.
648aed8db341a2c376a95fa0ab997b8d0051664f
2774
2773
2024-01-12T14:09:02Z
Lollypop
2
/* Certificate Enddate */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is no longer needed, since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18 January 2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But the daemon needs write access to the log file, or use syslog.
The config, certificate and key are not readable by the user (primary group nogroup) but by the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this into /lib/systemd/system/radsecproxy.service and run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work you need to have configured a client that matches the host you are querying from:
<pre>
client openssl_check_radsec {
host a.b.c.d
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will only show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected freeradius servers, check whether the file came from a Windows user and is in DOS format like this (use vi, not less, to see the ^M characters):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it to Unix format.
Yes, it is ugly that radsecproxy does not convert it, but that's life.
120cb19e12e9120f86a67fe9fd5c465e69787b6a
2775
2774
2024-01-12T14:18:04Z
Lollypop
2
/* /etc/radsec/radsecproxy.conf */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is not needed anymore since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LogThreadId on
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
TlsVersion TLS1_2
CipherList TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:AES256-GCM-SHA384
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But the daemon needs write access to the log file, or use syslog.
The config, certificate and key are not readable by the user (primary group nogroup) but by the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this into /lib/systemd/system/radsecproxy.service and run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work you need to have configured a client that matches the host you are querying from:
<pre>
client openssl_check_radsec {
host a.b.c.d
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will only show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected freeradius servers, check whether the file came from a Windows user and is in DOS format like this (use vi, not less, to see the ^M characters):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it to Unix format.
Yes, it is ugly that radsecproxy does not convert it, but that's life.
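A portable way to repair such a file is to strip the carriage returns directly; dos2unix does the same, but tr is available everywhere. A small sketch (the file content and names here are just examples):
<syntaxhighlight lang=bash>
# Create a PEM-style file with DOS (CRLF) line endings, then strip the CRs.
pem=$(mktemp)
printf 'LINE1\r\nLINE2\r\n' > "$pem"
tr -d '\r' < "$pem" > "$pem.unix"
wc -c < "$pem"       # 14 bytes with the CRs
wc -c < "$pem.unix"  # 12 bytes without
</syntaxhighlight>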
565a5d5275f11f5560646f8b1ae30928c7dfc4e3
2776
2775
2024-01-12T14:19:25Z
Lollypop
2
/* /etc/radsec/clients.conf */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9 and the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net] this patch is not needed anymore since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LogThreadId on
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
TlsVersion TLS1_2
CipherList TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:AES256-GCM-SHA384
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
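The <code>&lt;hash&gt;.0</code> naming is OpenSSL's hashed CA directory scheme; a recent OpenSSL can also create the links for you with <code>openssl rehash /etc/radsec/cert/ca</code>. The manual steps above can be sketched with a throwaway self-signed certificate (all paths here are temporary examples, not the real CA):
<syntaxhighlight lang=bash>
# Generate a throwaway self-signed root, compute its subject hash, and
# install it under <hash>.0 as OpenSSL's CA directory lookup expects.
cadir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example-root' \
    -keyout "$cadir/key.pem" -out "$cadir/root.pem" -days 1 2>/dev/null
h=$(openssl x509 -noout -hash -in "$cadir/root.pem")
cp "$cadir/root.pem" "$cadir/$h.0"
ls "$cadir"
</syntaxhighlight>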
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (tlr); you have to customize it for other countries.
<syntaxhighlight lang=text>
#
## DFN
#
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
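The typo-catching realm patterns above are plain regular expressions, so they can be sanity-checked outside radsecproxy. A quick check with grep, assuming POSIX ERE behaves like radsecproxy's regex dialect for these simple patterns:
<syntaxhighlight lang=bash>
# A realm with no dot after the @ is caught by /@[^.]+$/,
# a double dot by /@.*\.\..*$/, and a sane realm matches neither.
echo 'user@localdomain' | grep -qE '@[^.]+$'    && echo 'no-dot realm caught'
echo 'user@uni..tld'    | grep -qE '@.*\.\..*$' && echo 'double-dot realm caught'
echo 'user@eduroam.de'  | grep -qE '@[^.]+$'    || echo 'valid realm passes'
</syntaxhighlight>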
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root.
But the daemon needs write access to the log file, or use syslog.
The config, certificate and key are not readable by the user (primary group nogroup) but by the group radsecproxy that the process runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# groupadd -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Install this as /lib/systemd/system/radsecproxy.service, then run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work you need to have configured a client entry that matches the host you are connecting from:
<pre>
client openssl_check_radsec {
host a.b.c.d
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will just show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected FreeRADIUS servers, check whether the certificate file came from a Windows user and is in DOS format like this (use vi, not less, to make the ^M characters visible):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it back to Unix format.
Yes, it is ugly that radsecproxy does not convert this itself... but that's life.
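If dos2unix is not at hand, stripping the carriage returns with tr achieves the same result. A self-contained demonstration on a scratch file (paths and contents are examples, not the real certificate):

```shell
# Build a sample file with DOS (CRLF) line endings, like the broken certificate
printf 'line-one\r\nline-two\r\n' > /tmp/cert-dos.pem

# Strip the carriage returns — the same effect as dos2unix
tr -d '\r' < /tmp/cert-dos.pem > /tmp/cert-unix.pem

# Count lines still containing a CR (prints 0 for the cleaned file)
grep -c "$(printf '\r')" /tmp/cert-unix.pem || true
```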
f5ca169605167f05b83d122709f62c3cfe09b0e4
2777
2776
2024-01-12T14:20:15Z
Lollypop
2
/* /etc/radsec/clients.conf */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9, and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LogThreadId on
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
TlsVersion TLS1_2
CipherList TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:AES256-GCM-SHA384
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
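The hash-and-rename step can be exercised end to end with a throwaway self-signed CA (all subjects and paths below are made up for the demonstration). Newer OpenSSL also ships <code>openssl rehash</code>, which creates the <code>&lt;hash&gt;.0</code> links for a whole directory in one go:

```shell
# Create a throwaway CA certificate (subject is arbitrary)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Demo Root CA' \
    -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem -days 1 2>/dev/null

# Install it under its subject-hash name, as radsecproxy's CACertificatePath expects
mkdir -p /tmp/demo-ca-dir
h=$(openssl x509 -noout -hash -in /tmp/demo-ca.pem)
cp /tmp/demo-ca.pem "/tmp/demo-ca-dir/${h}.0"
ls /tmp/demo-ca-dir
```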
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<syntaxhighlight lang=text>
#
## DFN
#
client tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - no dot in realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And then the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog).
The config, certificate and key are owned by root and are not readable via the user's primary group (nogroup); the process reads them through the group radsecproxy it runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -G radsecproxy -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Install this as /lib/systemd/system/radsecproxy.service, then run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work you need to have configured a client entry that matches the host you are connecting from:
<pre>
client openssl_check_radsec {
host a.b.c.d
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will just show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
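On the proxy host itself the expiry can also be checked offline: <code>openssl x509 -checkend N</code> exits non-zero if the certificate expires within N seconds. Demonstrated on a throwaway certificate, since the real key material is readable only by root:radsecproxy:

```shell
# Generate a short-lived throwaway certificate for the demonstration
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
    -keyout /tmp/enddate-key.pem -out /tmp/enddate-cert.pem -days 1 2>/dev/null

# Print the notAfter date and warn if the cert expires within the next hour
openssl x509 -enddate -noout -in /tmp/enddate-cert.pem
openssl x509 -checkend 3600 -noout -in /tmp/enddate-cert.pem \
    && echo "ok: valid for more than an hour" || echo "warning: expires soon"
```

The same -checkend call against /etc/radsec/cert/radsecproxy-cert.pem makes a handy cron-based expiry alert.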
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected FreeRADIUS servers, check whether the certificate file came from a Windows user and is in DOS format like this (use vi, not less, to make the ^M characters visible):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it back to Unix format.
Yes, it is ugly that radsecproxy does not convert this itself... but that's life.
54aa4a11536c405738460896f51039cb1dce1d28
2778
2777
2024-01-12T14:20:55Z
Lollypop
2
/* /etc/radsec/clients.conf */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9, and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LogThreadId on
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
TlsVersion TLS1_2
CipherList TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:AES256-GCM-SHA384
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS servers (TLR); you have to customize it for other countries.
<syntaxhighlight lang=text>
#
## DFN
#
client tld1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tld2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
client tld3 {
host 194.95.245.98
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^tld3\.eduroam\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
server tlr1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tlr2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the anonymous user has not been matched above, fail
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - no dot in realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tlr1
server tlr2
accountingserver tlr1
accountingserver tlr2
}
</syntaxhighlight>
===/etc/radsec/cert/radsecproxy.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And then the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or use syslog).
The config, certificate and key are owned by root and are not readable via the user's primary group (nogroup); the process reads them through the group radsecproxy it runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -G radsecproxy -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
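The resulting modes can be verified with GNU stat. The snippet below replays the chmod logic on a scratch copy of the directory layout (the /tmp paths are illustrative only):

```shell
# Recreate the layout and apply the same permissions as above
mkdir -p /tmp/radsec-demo/cert/ca
touch /tmp/radsec-demo/radsecproxy.conf /tmp/radsec-demo/cert/radsecproxy-key.pem
find /tmp/radsec-demo -type d -exec chmod 0750 {} \;
find /tmp/radsec-demo -type f -exec chmod 0640 {} \;

# Directories 750, files 640 — the group can read, others get nothing
stat -c '%a %n' /tmp/radsec-demo /tmp/radsec-demo/cert/radsecproxy-key.pem
```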
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Install this as /lib/systemd/system/radsecproxy.service, then run:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server whether radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1 -no_ssl2 -no_ssl3 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work you need to have configured a client entry that matches the host you are connecting from:
<pre>
client openssl_check_radsec {
host a.b.c.d
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will just show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected FreeRADIUS servers, check whether the certificate file came from a Windows user and is in DOS format like this (use vi, not less, to make the ^M characters visible):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it back to Unix format.
Yes, it is ugly that radsecproxy does not convert this itself... but that's life.
e7f3c335f438bf07a5c066c0ca8f7cbb59176ccb
2779
2778
2024-01-12T14:22:18Z
Lollypop
2
/* /etc/radsec/servers.conf */
wikitext
text/x-wiki
[[Category:Eduroam]]
=RadSecProxy=
==Build==
===Patch for radsecproxy-1.6.8 on Ubuntu 16.04===
In radsecproxy 1.6.9, and in the git source on [https://git.nordu.net/?p=radsecproxy.git;a=tree git.nordu.net], this patch is no longer needed since [https://git.nordu.net/?p=radsecproxy.git;a=commit;h=f3619bf65967255e1009fec42b28007b49e0f4e4 18.1.2017].
<syntaxhighlight lang=bash>
$ git clone https://git.nordu.net/radsecproxy.git
</syntaxhighlight>
[https://project.nordu.net/browse/RADSECPROXY-72 taken from here]
<syntaxhighlight lang=diff>
diff -rub radsecproxy-1.6.8/tcp.c radsecproxy-1.6.8_Ubuntu_16.04/tcp.c
--- radsecproxy-1.6.8/tcp.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tcp.c 2017-07-13 16:35:52.414151832 +0200
@@ -353,7 +353,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
diff -rub radsecproxy-1.6.8/tls.c radsecproxy-1.6.8_Ubuntu_16.04/tls.c
--- radsecproxy-1.6.8/tls.c 2016-09-21 13:49:09.000000000 +0200
+++ radsecproxy-1.6.8_Ubuntu_16.04/tls.c 2017-07-13 16:36:22.678166655 +0200
@@ -467,7 +467,7 @@
struct sockaddr_storage from;
socklen_t fromlen = sizeof(from);
- listen(*sp, 0);
+ listen(*sp, 16);
for (;;) {
s = accept(*sp, (struct sockaddr *)&from, &fromlen);
</syntaxhighlight>
===Configure===
<syntaxhighlight lang=bash>
$ ./configure --prefix=/opt/radsecproxy-1.6.8 --sysconfdir=/etc/radsec --with-ssl --enable-fticks
$ make clean all && sudo make install
</syntaxhighlight>
=== Another example: Version 1.7.2 from git ===
<syntaxhighlight lang=bash>
$ mkdir radsecproxy && cd radsecproxy
$ git clone --single-branch --branch 1.7.2 https://github.com/radsecproxy/radsecproxy tags/1.7.2
$ cd tags/1.7.2
$ ./autogen.sh
$ ./configure --prefix=/opt/radsecproxy-${PWD##*/} --sysconfdir=/etc/radsec --with-ssl
$ make clean all && sudo make install
</syntaxhighlight>
==Config==
===/etc/radsec/radsecproxy.conf===
<syntaxhighlight lang=text>
# Master config file for radsecproxy
IPv4Only on
listenUDP <IP>:1812
listenUDP <IP>:1813
listenTLS <IP>:2083
LogLevel 5 # For testing later reduce to 3
#LogDestination file:///var/log/radsecproxy.log
LogDestination x-syslog:///LOG_DAEMON
LogThreadId on
LoopPrevention on
######## TLS section
tls default {
CACertificatePath /etc/radsec/cert/ca
CertificateFile /etc/radsec/cert/radsecproxy-cert.pem
CertificateKeyFile /etc/radsec/cert/radsecproxy-key.pem
CertificateKeyPassword <PASSWORD>
TlsVersion TLS1_2
CipherList TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:AES256-GCM-SHA384
}
Include /etc/radsec/rewrites.conf
Include /etc/radsec/clients.conf
Include /etc/radsec/servers.conf
Include /etc/radsec/realms.conf
</syntaxhighlight>
===ca certificate in /etc/radsec/cert/ca===
For DFN users this is the TeleSec root certificate.
====The destination file name is <hash of the certificate>.0====
<syntaxhighlight lang=text>
# openssl x509 -noout -hash -in /tmp/telesec.pem
1e09d511
# mv /tmp/telesec.pem /etc/radsec/cert/ca/1e09d511.0
</syntaxhighlight>
====/etc/radsec/cert/ca/1e09d511.0====
<syntaxhighlight lang=text>
subject= /C=DE/O=T-Systems Enterprise Services GmbH/OU=T-Systems Trust Center/CN=T-TeleSec GlobalRoot Class 2
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----
</syntaxhighlight>
===/etc/radsec/rewrites.conf===
<syntaxhighlight lang=text>
## Empty for our setup
</syntaxhighlight>
===/etc/radsec/clients.conf===
This matches the German top-level RADIUS (TLR) servers; customize it for other countries.
<syntaxhighlight lang=text>
#
## DFN
#
client tld1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
}
client tld2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
}
client tld3 {
host 194.95.245.98
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^tld3\.eduroam\.de$/
}
# Our WLAN Controller
client wlc {
host 10.1.1.0/24
type udp
secret ****secret****
}
#client anyIP4TLS {
# host 0.0.0.0/0
# type TLS
#}
</syntaxhighlight>
===/etc/radsec/servers.conf===
<syntaxhighlight lang=text>
#
## UDP Radius
#
#Server Our-EduroamRadiusAuth {
# host <internal radius server>
# port 1812
# type udp
# secret ****secret****
#}
#Server Our-EduroamRadiusAcct {
# host <internal radius accounting server>
# port 1813
# type udp
# secret ****secret****
#}
#
## TLS Radius / RadSec
#
server freeradius-1 {
host <internal radius accounting server1>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius1\.domain\.tld$/
StatusServer on
secret ****secret****
}
server freeradius-2 {
host <internal radius accounting server2>
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^freeradius2\.domain\.tld$/
StatusServer on
secret ****secret****
}
#
## DFN
#
server tld1 {
host 193.174.75.134
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius1\.dfn|tld1\.eduroam)\.de$/
StatusServer on
}
server tld2 {
host 193.174.75.138
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^(radius2\.dfn|tld2\.eduroam)\.de$/
StatusServer on
}
server tld3 {
host 194.95.245.98
type tls
certificatenamecheck off
matchCertificateAttribute CN:/^tld3\.eduroam\.de$/
StatusServer on
}
</syntaxhighlight>
===/etc/radsec/realms.conf===
<syntaxhighlight lang=text>
# Our domain domain.tld
realm /(eduroam|anonymous)@domain\.tld$/ {
server freeradius-1
server freeradius-2
accountingServer freeradius-1
accountingServer freeradius-2
}
# If the realm was not matched above, reject it.
# So users that use their real identity fail, too. Force anonymous!
realm /@domain\.tld$/ {
replymessage "Access rejected, wrong anonymous identity. Use eduroam@domain.tld as anonymous identity."
accountingresponse on
}
# Other domain of our site not used for eduroam
realm /@wrong-domain\.tld$/ {
replymessage "Misconfigured client: Use domain.tld as domain instead."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /@.*\.3gppnetwork\.org$/ {
replymessage "Misconfigured client."
accountingresponse on
}
# Default realm of some clients. Do not send to top level radius servers.
realm /myabc\.com$/ {
replymessage "Misconfigured client: default realm of Intel PRO/Wireless supplicant! Rejected by us."
accountingresponse on
}
# Empty realm. Do not send to top level radius servers.
realm /^$/ {
replymessage "Misconfigured client: empty realm! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm without any dot in it. Do not send to top level radius servers.
realm /@[^\.]+$/ {
replymessage "Misconfigured client: Typo in realm - No dot in realm ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a double dot in it. Do not send to top level radius servers.
realm /@.*\.\..*$/ {
replymessage "Misconfigured client: Typo in realm - .. ! Rejected by us."
accountingresponse on
}
# Typo in realm. Realm with a space in it. Do not send to top level radius servers.
realm /@.*\s+.*$/ {
replymessage "Misconfigured client: Typo in realm - Don't use spaces in your realm! Rejected by us."
accountingresponse on
}
# All other realms -> Eduroam toplevel servers
realm * {
server tld1
server tld2
accountingserver tld1
accountingserver tld2
}
</syntaxhighlight>
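radsecproxy tries the realm blocks in the order they appear and uses the first match, so the order of the reject rules above matters. A quick way to sanity-check which rule a given outer identity would hit is to replay the regexes with grep. This is a sketch: the labels and the trimmed pattern list are illustrative, not part of the config:
<syntaxhighlight lang=bash>
# Sketch: replay a subset of the realm regexes above in file order and
# report the first one that matches an outer identity.
first_match() {
    id=$1
    for rule in \
        'forward:(eduroam|anonymous)@domain\.tld$' \
        'reject-real-identity:@domain\.tld$' \
        'reject-3gpp:@.*\.3gppnetwork\.org$' \
        'reject-no-dot:@[^.]+$'
    do
        label=${rule%%:*}
        pattern=${rule#*:}
        if printf '%s' "$id" | grep -qE "$pattern"; then
            printf '%s\n' "$label"
            return
        fi
    done
    printf 'default\n'   # would go to the top-level servers
}
first_match 'eduroam@domain.tld'    # forward
first_match 'john.doe@domain.tld'   # reject-real-identity
first_match 'user@gmail'            # reject-no-dot
first_match 'user@other.org'        # default
</syntaxhighlight>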
===/etc/radsec/cert/radsecproxy-cert.pem===
<syntaxhighlight lang=text>
subject=/CN=radsecproxy.domain.tld/OU=bla/O=bli/L=Hamburg/ST=Hamburg/C=DE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
And now the whole certificate chain...
</syntaxhighlight>
==Run the daemon==
===Security===
There is no need to run radsecproxy as root, but the daemon needs write access to its log file (or has to log via syslog).
The config, certificate and key are not world-readable; the process reads them via the group radsecproxy it runs in (see the systemd unit file radsecproxy.service).
====User====
<syntaxhighlight lang=bash>
# addgroup -g 2083 radsecproxy
# useradd -u 2083 -g nogroup -s /bin/false -d /nonexistent radsecproxy
</syntaxhighlight>
====Permissions====
<syntaxhighlight lang=bash>
# chown -R root:radsecproxy /etc/radsec
# find /etc/radsec -type d -exec chmod 0750 {} \;
# find /etc/radsec -type f -exec chmod 0640 {} \;
</syntaxhighlight>
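To check later that nothing under /etc/radsec has drifted from these modes, a find that prints only offending entries is handy. A sketch, demonstrated on a throwaway tree so it is self-contained; point it at /etc/radsec in production:
<syntaxhighlight lang=bash>
# Sketch: list any file or directory whose mode deviates from 0640/0750.
dir=$(mktemp -d)                    # stand-in for /etc/radsec
mkdir -m 0750 "$dir/cert"
touch "$dir/cert/radsecproxy-cert.pem"
chmod 0640 "$dir/cert/radsecproxy-cert.pem"
find "$dir" -mindepth 1 \
    \( -type d ! -perm 0750 -o -type f ! -perm 0640 \) -print
# no output means all modes are as expected
rm -r "$dir"
</syntaxhighlight>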
====systemd unit file====
<syntaxhighlight lang=bash>
# systemctl cat radsecproxy.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Unit]
Description=radsecproxy
ConditionPathExists=/etc/radsec/radsecproxy.conf
After=network.target
Documentation=man:radsecproxy(1)
[Service]
Type=forking
User=radsecproxy
Group=radsecproxy
RuntimeDirectory=radsecproxy
RuntimeDirectoryMode=0700
PrivateTmp=yes
InaccessibleDirectories=/var
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
ExecStart=/opt/radsecproxy/sbin/radsecproxy -i /run/radsecproxy/radsecproxy.pid
PIDFile=/run/radsecproxy/radsecproxy.pid
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
Put this to /lib/systemd/system/radsecproxy.service and do:
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable radsecproxy.service
# systemctl start radsecproxy.service
</syntaxhighlight>
===Testing===
Check on the server if the radsecproxy is listening:
<syntaxhighlight lang=bash>
# lsof -Pni TCP:2083 -s TCP:Listen
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radsecpro 1344 radsecproxy 9u IPv4 22751 0t0 TCP <server ip>:2083 (LISTEN)
</syntaxhighlight>
===Certificate Enddate===
<SyntaxHighlight lang=bash>
$ openssl s_client -connect <IP>:2083 -tls1_2 -showcerts 2>/dev/null | openssl x509 -enddate -noout
notAfter=Oct 9 12:13:17 2020 GMT
</SyntaxHighlight>
For this to work you must have configured a client entry that matches the host you are testing from:
<pre>
client openssl_check_radsec {
host a.b.c.d
type tls
secret dummy_for_openssl
}
</pre>
If not, you will see something like this in the radsecproxy log:
<pre>
Jan 12 14:53:48 radsecproxy-1 radsecproxy[1359]: (305626) tlsservernew: ignoring request, no matching TLS client for a.b.c.d
</pre>
and openssl will just show:
<SyntaxHighlight lang=bash>
$ openssl s_client -showcerts -connect a.b.c.d:2083
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 283 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
</SyntaxHighlight>
==Problems & Solutions==
===freeradius: systemd[1]: freeradius.service: Main process exited, code=killed, status=11/SEGV===
If you see something like this on your connected FreeRADIUS servers, check whether you got the certificate file from a Windows user and whether it is in DOS format like this (use vi, not less):
<pre>
-----BEGIN CERTIFICATE-----
MIIHszCCBZugAwIBAgIMKX2RveVddPE7s2KGMA0GCSqGSIb3DQEBCwUAMHYxCzAJ^M
BgNVBAYTAkRFMUUwQwYDVQQKDDxWZXJlaW4genVyIEZvZXJkZXJ1bmcgZWluZXMg^M
RGV1dHNjaGVuIEZvcnNjaHVuZ3NuZXR6ZXMgZS4gVi4xIDAeBgNVBAMMF2VkdXJv^M
...
</pre>
You can use dos2unix to convert it to Unix format.
Yes, it is ugly that radsecproxy does not convert it itself... but that's life.
27e32d156f85504e116008284e7ed4072566b06e
Galera Cluster
0
383
2780
2637
2024-01-19T14:35:09Z
Lollypop
2
/* Get knowledge about your Cluster */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Make a CA certificate with a very long lifetime, as you don't want to do regular certificate updates at this point.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# Snapshot state transfer (SST): copy entire database, when new node joins
wsrep_sst_method = mariabackup
# set the the_very_secret_mariabackup_password to your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file is different per node (here for node1 IP 10.33.6.1):
/etc/mysql/mariadb.conf.d/zz-node.cnf
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
If you have something running on the default port 4567, you can change the <i>base_port</i> like this (here to 5000):
<syntaxhighlight lang=ini>
wsrep_provider_options = "base_port = 5000; gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:5000"
</syntaxhighlight>
Do not forget to change the <i>gmcast.listen_addr</i> at the end.
== Get knowledge about your Cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
==haproxy==
* /etc/haproxy/haproxy.cfg
<syntaxhighlight lang=bash>
defaults
timeout connect 5000
timeout client 50000
timeout server 50000
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
frontend mysqld_listen
bind 10.42.42.11:3306
bind 127.0.0.1:3306
mode tcp
log global
option dontlognull
option tcplog
use_backend galera_cluster
# Load Balancing for Galera Cluster
backend galera_cluster
balance leastconn
#balance leastconn
#balance roundrobin
mode tcp
log global
option tcpka
option log-health-checks
option mysql-check user haproxy post-41
option allbackups
default-server inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 1024 maxqueue 128 weight 100
server galera-ham-1 10.42.42.41:3306 check send-proxy-v2
server galera-ham-2 10.42.42.42:3306 check send-proxy-v2
server galera-muc-1 10.130.5.65:3306 check send-proxy-v2 backup
server galera-muc-2 10.130.5.66:3306 check send-proxy-v2 backup
</syntaxhighlight>
On the Galera cluster you need the <i>haproxy</i> user:
<syntaxhighlight lang=mysql>
MariaDB [(none)]> GRANT USAGE ON *.* TO `haproxy`@`10.42.42.11` identified by '';
</syntaxhighlight>
d13f67ecc5237a132489a39af54eb62bd1a5ddd4
2781
2780
2024-01-19T14:37:38Z
Lollypop
2
/* haproxy */
wikitext
text/x-wiki
[[Category:MariaDB]]
[[Category:MySQL]]
=Setup the Cluster=
==Install the packages==
On each node do as root:
* Add sources
<syntaxhighlight lang=bash>
# cat > /etc/apt/sources.list.d/mariadb.list << EOF
# MariaDB Server
# To use a different major version of the server, or to pin to a specific minor version, change URI below.
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main
deb [arch=amd64] http://downloads.mariadb.com/MariaDB/mariadb-10.5/repo/ubuntu $(lsb_release -cs) main/debug
# MariaDB MaxScale
# To use the latest stable release of MaxScale, use "latest" as the version
# To use the latest beta (or stable if no current beta) release of MaxScale, use "beta" as the version
deb [arch=amd64] https://dlm.mariadb.com/repo/maxscale/latest/apt $(lsb_release -cs) main
# MariaDB Tools
deb [arch=amd64] http://downloads.mariadb.com/Tools/ubuntu $(lsb_release -cs) main
EOF
</syntaxhighlight>
* Install the packages
<syntaxhighlight lang=bash>
# apt update
# apt install mariadb-server mariadb-backup galera-4
</syntaxhighlight>
==Setup certificates for the cluster communication==
===Make a CA certificate===
Make a CA certificate with a very long lifetime, as you don't want to do regular certificate updates at this point.
<syntaxhighlight lang=bash>
$ subject='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=Galera Cluster'
$ openssl req -new -x509 -nodes -days 365000 -newkey rsa:4096 -sha256 -keyout ca-key.pem -out ca-cert.pem -batch -subj "${subject}"
</syntaxhighlight>
===Create a certificate for each cluster node===
<syntaxhighlight lang=bash>
$ for node in {1..4}
do
emailAddress="dbadmin@server.de"
servername="maria-${node}.server.de"
subject="/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Databases/CN=${servername}/emailAddress=${emailAddress}"
openssl req -newkey rsa:4096 -nodes -keyout ${servername}-key.pem -out ${servername}-req.pem -batch -subj "${subject}"
openssl x509 -req -days 365000 -set_serial $(printf "%02d" "${node}") -in ${servername}-req.pem -out ${servername}-cert.pem -CA ca-cert.pem -CAkey ca-key.pem
done
</syntaxhighlight>
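Before distributing the files it is worth checking that each node certificate actually chains to the CA. A minimal self-contained sketch with a throwaway CA and one node key (the names are illustrative, the commands mirror the loop above on a smaller scale):
<syntaxhighlight lang=bash>
# Sketch: issue one throwaway node certificate and verify it against the CA.
set -e
dir=$(mktemp -d); cd "$dir"
openssl req -new -x509 -nodes -days 1 -newkey rsa:2048 -sha256 \
    -keyout ca-key.pem -out ca-cert.pem -batch -subj '/CN=Test CA' 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout node-key.pem \
    -out node-req.pem -batch -subj '/CN=maria-1.example' 2>/dev/null
openssl x509 -req -days 1 -set_serial 01 -in node-req.pem \
    -out node-cert.pem -CA ca-cert.pem -CAkey ca-key.pem 2>/dev/null
openssl verify -CAfile ca-cert.pem node-cert.pem   # must report OK
cd /; rm -r "$dir"
</syntaxhighlight>
Run the same `openssl verify -CAfile ca-cert.pem maria-N.server.de-cert.pem` against your real files before copying them out.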
===Copy keys and certificates to the nodes===
Copy the specific keys and certs to each node:
<syntaxhighlight lang=bash>
$ sudo mkdir --mode=0700 /etc/mysql/priv # put in here: maria-${node}.server.de-key.pem
$ sudo mkdir --mode=0750 /etc/mysql/cert # put in here: maria-${node}.server.de-cert.pem , ca-cert.pem
</syntaxhighlight>
== Configure the MariaDB Galera Cluster ==
=== Create a mariabackup user on each node ===
<syntaxhighlight lang=bash>
# mariadb
MariaDB [(none)]> grant reload, process, lock tables, replication client on *.* to 'mariabackup'@'localhost' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-1.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-2.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-3.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> grant reload, process, lock tables, binlog monitor on *.* to 'mariabackup'@'maria-4.server.de' identified by 'the_very_secret_mariabackup_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]>
</syntaxhighlight>
=== Galera settings ===
This file is identical on all nodes:
/etc/mysql/mariadb.conf.d/zz-galera.cnf
<syntaxhighlight lang=ini>
[galera]
# Cluster Configuration
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# gcomm://{ comma separated list of all cluster node IPs }
wsrep_cluster_address = gcomm://10.33.6.1,10.33.6.2,10.33.6.3,10.33.6.4
wsrep_cluster_name = MariaDB Galera Cluster
wsrep_on = ON
# Snapshot state transfer (SST): copy entire database, when new node joins
wsrep_sst_method = mariabackup
# set the the_very_secret_mariabackup_password to your real mariabackup password
wsrep_sst_auth = mariabackup:the_very_secret_mariabackup_password
[mariadb]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
</syntaxhighlight>
This file is different per node (here for node1 IP 10.33.6.1):
/etc/mysql/mariadb.conf.d/zz-node.cnf
<syntaxhighlight lang=ini>
[mariadb]
bind-address = 10.33.6.1
ssl_cert = /etc/mysql/cert/maria-1.server.de-cert.pem
ssl_key = /etc/mysql/priv/maria-1.server.de-key.pem
ssl_ca = /etc/mysql/cert/ca-cert.pem
[sst]
encrypt = 4
tkey = /etc/mysql/priv/maria-1.server.de-key.pem
tcert = /etc/mysql/cert/maria-1.server.de-cert.pem
tca = /etc/mysql/cert/ca-cert.pem
[galera]
wsrep_node_address = 10.33.6.1
wsrep_node_incoming_address = 10.33.6.1
wsrep_sst_receive_address = 10.33.6.1
wsrep_provider_options = "gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:4567"
</syntaxhighlight>
If you have something running on the default port 4567, you can change the <i>base_port</i> like this (here to 5000):
<syntaxhighlight lang=ini>
wsrep_provider_options = "base_port = 5000; gcache.size = 1G; gcache.recover = yes; socket.ssl_key=/etc/mysql/priv/maria-1.server.de-key.pem;socket.ssl_cert=/etc/mysql/cert/maria-1.server.de-cert.pem;socket.ssl_ca=/etc/mysql/cert/ca-cert.pem ; gmcast.listen_addr = ssl://10.33.6.1:5000"
</syntaxhighlight>
Do not forget to change the <i>gmcast.listen_addr</i> at the end.
== Get knowledge about your Cluster ==
=== Show wsrep_provider_options ===
<syntaxhighlight lang=bash>
$ mariadb -NBABe 'show variables like "wsrep_provider_options"' | awk '{gsub(/$/,":\n",$1); gsub(/(;|$)/,";\n"); printf $0; }'
</syntaxhighlight>
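The awk one-liner above reflows the long option string; a simpler equivalent is to split on the semicolons with tr. Shown here on a sample string standing in for the live query output (assumption: same semicolon-separated `key = value` format):
<syntaxhighlight lang=bash>
# Sketch: one option per line from a wsrep_provider_options-style string.
opts='base_port = 5000; gcache.size = 1G; gcache.recover = yes'
printf '%s\n' "$opts" | tr ';' '\n' | sed 's/^ *//;s/ *$//'
</syntaxhighlight>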
==haproxy==
Hosts:
* haproxy 10.42.42.11
* galera-ham-1 10.42.42.41
* galera-ham-2 10.42.42.42
* galera-muc-1 10.130.5.65
* galera-muc-2 10.130.5.66
===/etc/haproxy/haproxy.cfg===
<syntaxhighlight lang=bash>
defaults
timeout connect 5000
timeout client 50000
timeout server 50000
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
frontend mysqld_listen
bind 10.42.42.11:3306
bind 127.0.0.1:3306
mode tcp
log global
option dontlognull
option tcplog
use_backend galera_cluster
# Load Balancing for Galera Cluster
backend galera_cluster
balance leastconn
#balance leastconn
#balance roundrobin
mode tcp
log global
option tcpka
option log-health-checks
option mysql-check user haproxy post-41
option allbackups
default-server inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 1024 maxqueue 128 weight 100
server galera-ham-1 10.42.42.41:3306 check send-proxy-v2
server galera-ham-2 10.42.42.42:3306 check send-proxy-v2
server galera-muc-1 10.130.5.65:3306 check send-proxy-v2 backup
server galera-muc-2 10.130.5.66:3306 check send-proxy-v2 backup
</syntaxhighlight>
===Grant===
On the Galera cluster you need the <i>haproxy</i> user:
<syntaxhighlight lang=mysql>
MariaDB [(none)]> GRANT USAGE ON *.* TO `haproxy`@`10.42.42.11` identified by '';
</syntaxhighlight>
90a4ffb08cdd50a0c0ac85e89023e82950b186b1
VMWare guest
0
409
2782
2024-01-23T16:18:27Z
Lollypop
2
Created page with "[[Category:VMWare]] ==Match the guest disk with the vmdk in Linux== The only way I know is to download the VMX file from the location where your VM resides. In you VCenter # find your VM # go to the <i>Datastores</i> tab # select the datastore where your VM config is in # go to the <i>Files</i> tab # select your VM name on the left side # find the <VM Name>.vmx file on the right side and select it via the square box left to the name # Press <i>DOWNLOAD</i> Save it whe..."
wikitext
text/x-wiki
[[Category:VMWare]]
==Match the guest disk with the vmdk in Linux==
The only way I know is to download the VMX file from the location where your VM resides.
In your vCenter:
# find your VM
# go to the <i>Datastores</i> tab
# select the datastore where your VM config is in
# go to the <i>Files</i> tab
# select your VM name on the left side
# find the <VM Name>.vmx file on the right side and select it via the square box left to the name
# Press <i>DOWNLOAD</i>
Save it wherever you want.
Open a shell and do the following.
You have to sort the SCSI hosts by their PCI slot number (look at scsi3 in this example):
<SyntaxHighlight lang=bash>
$ grep -E "scsi[0-9]+\.pciSlotNumber" vmware-guest.vmx | sort -k3,3
scsi0.pciSlotNumber = "16"
scsi3.pciSlotNumber = "32"
scsi1.pciSlotNumber = "34"
scsi2.pciSlotNumber = "35"
</SyntaxHighlight>
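Note that `sort -k3,3` sorts the quoted slot numbers lexically, which happens to work for these values; with mixed widths (e.g. "9" vs "16") a numeric sort on the digits is safer. A sketch on sample lines (they stand in for the grep output from the .vmx file):
<syntaxhighlight lang=bash>
# Sketch: sort pciSlotNumber lines numerically by the quoted value,
# using the quote character as the field separator.
printf '%s\n' \
    'scsi1.pciSlotNumber = "16"' \
    'scsi0.pciSlotNumber = "9"' \
    'scsi2.pciSlotNumber = "34"' |
sort -t'"' -k2,2n
</syntaxhighlight>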
For the SCSI devices, the option scsi[0-9]+:[0-9]+\.fileName maps them to the corresponding VMDKs:
<SyntaxHighlight lang=bash>
$ grep -E "scsi[0-9]+:[0-9]+\.fileName" vmware-guest.vmx | sort
scsi0:1.fileName = "vmware-guest_2.vmdk"
scsi0:2.fileName = "/vmfs/volumes/ad3ee111-a9176b63/vmware-guest/vmware-guest.vmdk"
scsi0:3.fileName = "/vmfs/volumes/ad3ee111-a9176b63/vmware-guest/vmware-guest_1.vmdk"
scsi1:2.fileName = "/vmfs/volumes/4c4de182-ed59d164/vmware-guest/vmware-guest.vmdk"
scsi1:3.fileName = "/vmfs/volumes/4c4de182-ed59d164/vmware-guest/vmware-guest_3.vmdk"
scsi2:0.fileName = "/vmfs/volumes/77482ef3-91f2dc5e-0000-000000000000/vmware-guest/vmware-guest.vmdk"
scsi3:0.fileName = "/vmfs/volumes/90b16a56-cda26ee5-0000-000000000000/vmware-guest/vmware-guest.vmdk"
scsi3:1.fileName = "/vmfs/volumes/90b16a56-cda26ee5-0000-000000000000/vmware-guest/vmware-guest_4.vmdk"
</SyntaxHighlight>
So far I have found no easy way to match the volume IDs to volume names, but if you look at the <i>Summary</i> of your volumes in vSphere you will find something like:
<pre>
Type: NFS 4.1
URL: ds:///vmfs/volumes/77482ef3-91f2dc5e-0000-000000000000/
</pre>
which matches our scsi2:0.fileName in this example.
Putting this knowledge together in a little awk script (asorti() requires gawk):
<SyntaxHighlight lang=awk>
$ awk '
$1 ~ /scsi[0-9]+\.pciSlotNumber$/ {
scsi=$1;
gsub(/\..*$/,"",scsi);
slots[$NF]=scsi;
}
$1 ~ /scsi[0-9]+:[0-9]+\.fileName/ {
id=$1;
gsub(/\..*$/,"",id);
gsub(/:/,SUBSEP,id);
vmdk[id]=$NF;
}
END{
host=32; # base host id of your linux vm
n=asorti(slots,slots_sorted);
for(i=1; i<=n; i++){
for(key in vmdk){
split( key, values, SUBSEP);
# values[1] is the pciSlotNumber name (scsi0 etc.)
# values[2] is the scsi id
if(values[1]==slots[slots_sorted[i]]) {
print host":0:"values[2]":0",vmdk[key];
}
}
host++;
}
}' vmware-guest.vmx
</SyntaxHighlight>
The output might look like this:
<pre>
32:0:1:0 "hhlokavs-ts_2.vmdk"
32:0:2:0 "/vmfs/volumes/ad3ee111-a9176b63/hhlokavs-ts/hhlokavs-ts.vmdk"
32:0:3:0 "/vmfs/volumes/ad3ee111-a9176b63/hhlokavs-ts/hhlokavs-ts_1.vmdk"
33:0:0:0 "/vmfs/volumes/90b16a56-cda26ee5-0000-000000000000/hhlokavs-ts/hhlokavs-ts.vmdk"
33:0:1:0 "/vmfs/volumes/90b16a56-cda26ee5-0000-000000000000/hhlokavs-ts/hhlokavs-ts_4.vmdk"
34:0:2:0 "/vmfs/volumes/4c4de182-ed59d164/hhlokavs-ts/hhlokavs-ts.vmdk"
34:0:3:0 "/vmfs/volumes/4c4de182-ed59d164/hhlokavs-ts/hhlokavs-ts_3.vmdk"
35:0:0:0 "/vmfs/volumes/77482ef3-91f2dc5e-0000-000000000000/hhlokavs-ts/hhlokavs-ts.vmdk"
</pre>
Inside the guest you find the same IDs with lsscsi:
<SyntaxHighlight lang=bash>
$ lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 2.0 /dev/sda
[32:0:2:0] disk VMware Virtual disk 2.0 /dev/sdb
[32:0:3:0] disk VMware Virtual disk 2.0 /dev/sdc
[33:0:0:0] disk VMware Virtual disk 2.0 /dev/sdd
[33:0:1:0] disk VMware Virtual disk 2.0 /dev/sde
[34:0:2:0] disk VMware Virtual disk 2.0 /dev/sdf
[34:0:3:0] disk VMware Virtual disk 2.0 /dev/sdg
[35:0:0:0] disk VMware Virtual disk 2.0 /dev/sdh
</SyntaxHighlight>
3776cdae0280ca60edbc841b042f18ffa9655734
Roundcube
0
232
2784
2528
2024-02-08T14:10:14Z
Lollypop
2
Lollypop moved page [[Roundcube Config]] to [[Roundcube]]: Misspelled title
wikitext
text/x-wiki
[[Category:Web]]
[[Category:Mail]]
==Automatic import carddav from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<syntaxhighlight lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</syntaxhighlight>
This automatically imports all Owncloud contacts from the addressbook "contacts" into the Roundcube carddav plugin:
/usr/share/roundcube/plugins/carddav/config.inc.php
<syntaxhighlight lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from the CardDAV preferences section so users can't
// even see it
'hide' => false,
);
</syntaxhighlight>
d48236e6fb88a57d9db1114101178bd081bbd65f
2786
2784
2024-02-08T14:21:03Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Web]]
[[Category:Mail]]
==Automatic import carddav from Owncloud==
Enable carddav:
/etc/roundcube/config.inc.php:
<syntaxhighlight lang=php>
...
// List of active plugins (in plugins/ directory)
$config['plugins'] = array(
'carddav', // <---- Enable carddav
'archive',
);
...
</syntaxhighlight>
This automatically imports all Owncloud contacts from the addressbook "contacts" into the Roundcube carddav plugin:
/usr/share/roundcube/plugins/carddav/config.inc.php
<syntaxhighlight lang=php>
...
$prefs['OwnCloud-Contacts'] = array(
// required attributes
'name' => 'Cloud->contacts->',
'username' => '%u',
'password' => '%p',
'url' => 'https://$cloudserver/remote.php/carddav/addressbooks/%u/contacts/',
// optional attributes
'active' => true,
'readonly' => false,
'refresh_time' => '01:00:00',
'preemptive_auth' => 1,
// attributes that are fixed (i.e., not editable by the user) and
// auto-updated for this preset
'fixed' => array('name', 'active', ),
// hide this preset from the CardDAV preferences section so users can't
// even see it
'hide' => false,
);
</syntaxhighlight>
==Change CSS==
Enter the skin directory, e.g. <i>elastic</i>.<br>
Put all changes into <i>styles/_styles.less</i> (yes, with underscore!).<br>
After changing or creating it, run:
<SyntaxHighlight lang=bash>
# lessc --clean-css="--s1 --advanced" styles/styles.less > styles/styles.min.css
</SyntaxHighlight>
(Yes, without underscore ;-) )
On Ubuntu you can install lessc by running:
<SyntaxHighlight lang=bash>
# apt install node-less node-clean-css
</SyntaxHighlight>
===Hide the about button===
<SyntaxHighlight lang=css>
//
// Hide about
//
a.about[id^="rcmbtn"] {
display: none;
visibility: hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
</SyntaxHighlight>
6d86d7193773809b18ec0420d0b224123788fc4b
Roundcube Config
0
410
2785
2024-02-08T14:10:14Z
Lollypop
2
Lollypop moved page [[Roundcube Config]] to [[Roundcube]]: Misspelled title
wikitext
text/x-wiki
#REDIRECT [[Roundcube]]
4d96ab967ef855d293643c2262fef6ca6755abfa
SSH Tipps und Tricks
0
75
2787
2771
2024-02-15T12:02:33Z
Lollypop
2
/* ppk -> OpenSSH format */
wikitext
text/x-wiki
[[Category:SSH|Tipps]]
[[Category:Putty|Tipps]]
=SSH, way to the target=
==SSH over one or more hops==
To make an SSH connection from host_a to host_b you have to tunnel through two hosts (jumphost_1 and jumphost_2). If you log in hop by hop, it is often very hard to loop the port forwardings or the SOCKS5 proxy through all hops. It is easier to define <i>ProxyJump</i>s for the whole way from host_a to host_b.
host_b is only reachable from jumphost_2, so we make an entry for it in ~/.ssh/config:
<pre>
Host host_b
ProxyJump jumphost_2
</pre>
But we can only get to jumphost_2 via jumphost_1, so we need an entry for this as well:
<pre>
Host jumphost_2
ProxyJump jumphost_1
</pre>
Now simply type <i>ssh host_b</i> on host_a and you will be tunneled through the two gateways jumphost_1 and jumphost_2.
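Since OpenSSH 7.3 the same chain can also be given ad hoc on the command line with <i>-J</i>, the command-line form of <i>ProxyJump</i> (same host names as above):
<pre>
user@host_a$ ssh -J jumphost_1,jumphost_2 host_b
</pre>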
==Port forwardings, for example for NFS, are now easy==
<pre>
root@host_a# share -F nfs -o ro=@127.0.0.1/32 /tmp
root@host_a# ssh -R 22049:localhost:2049 user@host_b
user@host_b$ su -
root@host_b# mount -oro nfs://127.0.0.1:22049/tmp /mnt
</pre>
In the background the tunnel connections are established and the port forwarding is done directly from host_a to host_b. Very slim and elegant.
==Breakout from paradise==
Problem: The environment you are in is so locked down with firewalls that you cannot work. But you have to SSH out to look something up or fetch something. Well, there is always a way...
You need a locally installed [http://www.meadowy.org/~gotoh/projects/connect connect], e.g. under Ubuntu: apt-get install connect-proxy.
Furthermore you need an SSH server whose sshd listens on port 443, because most proxies only let you through on well-known ports.
Then you enter in the ~/.ssh/config:
<pre>
Host ssh-via-proxy
ProxyCommand connect -H proxy-server:3128 ssh-server 443
</pre>
And whoosh, <i>ssh ssh-via-proxy</i> puts you on the SSH target where you wanted to go. Of course you can chain another host behind this connection via <i>ProxyJump ssh-via-proxy</i> and so on.
==Ah yes... the internal wiki...==
Also nice: if it is only reachable from the internal network, we simply request it via a SOCKS proxy:
<pre>
user@host_a$ ssh -C -N -T -f -D8080 internal-host
user@host_a$ chromium-browser --proxy-server="socks5://localhost:8080" https://wiki.internal.office/ &
</pre>
Options are:
<pre>
-C Requests compression <- this is optional
-N Do not execute a remote command.
-T Disable pseudo-tty allocation.
-f Requests ssh to go to background just before command execution.
-D Local-Remote-Socks5-Proxy Port
</pre>
Or again via ~/.ssh/config:
<pre>
Host wiki
Compression yes
DynamicForward 8888
RequestTTY no
PermitLocalCommand yes
LocalCommand chromium-browser --proxy-server="socks5://localhost:8888" https://wiki.intern.firma.de/ &
Hostname internal-host
</pre>
And then
<SyntaxHighlight lang=bash>
$ ssh -N -f wiki
</SyntaxHighlight>
==Do this but only if...==
If you want, for example, a <i>ProxyJump</i> only while you are connected remotely via OpenVPN, but not while you are at the office:
<pre>
Match exec "ip ro sh dev tun0 src 10.208.129.0/24 2>/dev/null" host !jumphost.office,*.office,172.16.*.*
ProxyJump jumphost.office
</pre>
What happens here:<br>
You will be proxied over jumphost.office only if both conditions hold:<br>
- There is a route on dev tun0 whose local source IP matches 10.208.129.0/24<br>
- The destination host matches !jumphost.office,*.office,172.16.*.*<br>
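Whether the <i>Match</i> actually fires can be checked without connecting: <i>ssh -G</i> prints the effective configuration for a host, so you can grep for the resulting proxyjump (hostname is just an example):
<pre>
user@host_a$ ssh -G some-host.office | grep -i proxyjump
</pre>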
=rsync from remote to remote=
Sometimes your local host is just needed as a relay station to sync between two servers which cannot see each other. You can reach both because you are in the admin network.
But you need to get files from HostA to HostB, and your laptop does not have enough disk space to copy them from HostA to the local disk first and then on to HostB.
This is a possible solution:
1. Make a reverse forwarding from localhost:PortX on HostA to HostB port 22 (so all packets you send to PortX on HostA come back to your laptop and are forwarded to port 22, the SSH port, on HostB).
2. Execute rsync on HostA and tell rsync to make an SSH connection to port PortX for the destination host (which is sent back to your laptop and from there to HostB port 22, see 1.).
Here is an example (with a random port between 50000 and 52999):
<SyntaxHighLight lang=bash>
$ PortX=$(( ${RANDOM} % 3000 + 50000 ))
$ HostA=10.1.0.42
$ HostB=10.2.0.43
$ ssh -AR 127.0.0.1:${PortX}:${HostB}:22 ${HostA} "rsync -e 'ssh -p ${PortX} -o StrictHostKeyChecking=no' -PWav <HostA-Path> 127.0.0.1:<HostB-Path>"
</SyntaxHighLight>
Some explanations:
$RANDOM is a bash builtin, so this works only inside bash.
Use PortX=<your chosen port number> in other shells.
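Or, for a still-random port in any POSIX shell, let awk do the dice rolling (a sketch; same 50000-52999 range as above):
<syntaxhighlight lang=bash>
# rand() returns a value in [0,1), so this yields 50000..52999
$ PortX=$(awk 'BEGIN { srand(); print 50000 + int(rand() * 3000) }')
</syntaxhighlight>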
SSH Options:
<pre>
-A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can
access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the
keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).
-R [bind_address:]port:host:hostport
Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix
socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by
host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client.
Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty
bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the
server's GatewayPorts option is enabled (see sshd_config(5)).
If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O
forward the allocated port will be printed to the standard output.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
</pre>
=Fingerprint of a key=
For verification, shorter strings are easier to handle. The fingerprint is therefore handy to compare keys:
<pre>
$ ssh-keygen -lf ~/.ssh/id_rsa.pub
4096 SHA256:s1dtEFvY0EiJQcg66jTUon3BS6gfhSJT4Qegox4e7yk lollypop@lollybook (RSA)
</pre>
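Older SSH clients print MD5 fingerprints instead of SHA256; to compare against those, select the hash with <i>-E</i>:
<pre>
$ ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub
</pre>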
=Limit allowed users in sshd_config=
<syntaxhighlight lang=bash>
# SSH is only allowed for users in the group ssh except user syslog
AllowGroups ssh
DenyUsers syslog
</syntaxhighlight>
=PuTTY Portable=
==Launch pageant together with putty==
In the file ..\PortableApps\PuTTYPortable\App\AppInfo\Launcher\PuTTYPortable.ini enter the following below [Launch]:
<pre>
[Launch]
ProgramExecutable=putty\pageant.exe
CommandLineArguments='%PAL:DataDir%\settings\mykeys.ppk -c %PAL:AppDir%\putty\putty.exe'
DirectoryMoveOK=yes
SupportsUNC=yes
</pre>
For PortableApps see:
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/envsub.html Environment variable substitutions]
* [http://portableapps.com/manuals/PortableApps.comLauncher/ref/launcher.ini/launch.html#programexecutable Launch]
==ppk -> OpenSSH format==
<syntaxhighlight lang=bash>
$ nawk '/---- BEGIN SSH2 PUBLIC KEY ----/{printf "ssh-rsa "; getline; comment=$2; gsub(/"/,"",comment); getline line; while(line !~ /^---- END/){printf line; getline line;} printf " %s\n",comment;}' pubkey.ppk
</syntaxhighlight>
Or simply:
<syntaxhighlight lang=bash>
$ ssh-keygen -i -f putty.pub > openssh.pub
</syntaxhighlight>
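For the opposite direction, or for private keys, <i>puttygen</i> from the putty-tools package can convert as well (file names are examples):
<syntaxhighlight lang=bash>
$ puttygen mykey.ppk -O private-openssh -o id_rsa
</syntaxhighlight>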
=Problems with older destinations=
==Unable to negotiate with <IP> port 22: no matching host key type found. Their offer: ssh-dss==
<syntaxhighlight lang=bash>
$ ssh -oHostKeyAlgorithms=+ssh-dss <IP>
</syntaxhighlight>
==ssh_dispatch_run_fatal: Connection to <IP> port 22: DH GEX group out of range==
<syntaxhighlight lang=bash>
$ ssh -oKexAlgorithms=diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 <IP>
</syntaxhighlight>
==Allow outdated PubkeyAcceptedAlgorithms==
To reach hosts where you cannot use ed25519 or other modern keys:
<syntaxhighlight lang=bash>
$ ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa <old-rsa-host>
</syntaxhighlight>
=Problems with older clients/keys=
==Allow RSA keys again (ssh-rsa not in PubkeyAcceptedAlgorithms)==
If you try to connect to a recent OpenSSH daemon with an RSA key, you will find this in the log and cannot connect via SSH:
<syntaxhighlight lang=bash>
sshd[51342]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
</syntaxhighlight>
The temporary (and not recommended) workaround is to allow RSA keys in the sshd_config:
<syntaxhighlight lang=bash>
PubkeyAcceptedAlgorithms +ssh-rsa
</syntaxhighlight>
But you should rather switch to ED25519 keys.
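Generating such a replacement key only takes a second (the file name is the default one):
<syntaxhighlight lang=bash>
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
</syntaxhighlight>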
=SFTP chroot=
<syntaxhighlight lang=bash>
# mkdir --parents --mode=0755 /sftp_chroot/etc
</syntaxhighlight>
==/etc/fstab==
<syntaxhighlight lang=bash>
...
/etc/passwd /sftp_chroot/etc/passwd none ro,bind 0 0
/etc/group /sftp_chroot/etc/group none ro,bind 0 0
</syntaxhighlight>
==/etc/ssh/sshd_config==
<syntaxhighlight lang=bash>
...
AllowGroups ssh-user
Subsystem sftp internal-sftp
Match group sftp
AllowGroups sftp
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /sftp_chroot/
AuthorizedKeysFile /sftp_chroot/%h/.ssh/authorized_keys
</syntaxhighlight>
==Create SFTP user==
Now you can put authorized keys into the files /home/sftp/.authorized_keys/<i>username</i>
And create the sftp users like this:
<syntaxhighlight lang=bash>
# USER=myuser
# mkdir --parents --mode=0755 /home/sftp/${USER}
# useradd --create-home --home-dir /home/sftp/${USER}/home ${USER}
</syntaxhighlight>
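One thing sshd is picky about: every component of the <i>ChrootDirectory</i> path must be owned by root and must not be group- or world-writable, otherwise logins fail with "bad ownership or modes for chroot directory". So make sure:
<syntaxhighlight lang=bash>
# chown root:root /sftp_chroot
# chmod 755 /sftp_chroot
</syntaxhighlight>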
= Two factor authentication =
== Google Authenticator ==
As Google Authenticator is available for several smartphone OSes, I chose it for the OTP authentication.
All steps have to be done on the destination host.
=== Install libpam-google-authenticator ===
<syntaxhighlight lang=bash>
$ sudo apt-get install libpam-google-authenticator
</syntaxhighlight>
=== Add settings to the /etc/pam.d/sshd ===
Put this line at the top of your /etc/pam.d/sshd!
<syntaxhighlight lang=bash>
auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
</syntaxhighlight>
See the man page pam.d(5) or read here...
The meaning of the parameters:
* success=done : If pam_google_authenticator returns success (the code was correct), authentication is complete.
* new_authtok_reqd=done : If a new authentication token is required, also treat it as done. Done is like ok, <nowiki><man page></nowiki>except that the stack also terminates and control is immediately returned to the application.<nowiki></man page></nowiki>
* default=die : In all other cases (e.g. the code was wrong) fail immediately; no other authentication will be tried.
* nullok : Allow users who have not yet configured an OTP secret to authenticate without one.
=== Add settings to the /etc/ssh/sshd_config ===
These lines have to be in /etc/ssh/sshd_config:
<syntaxhighlight lang=bash>
UsePAM yes
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
</syntaxhighlight>
Without the change in /etc/pam.d/sshd, "PasswordAuthentication no" is not sufficient: sshd will still ask for a password, because the default /etc/pam.d/sshd enables password authentication.
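One step is still missing: each user has to generate an OTP secret (~/.google_authenticator) once on the destination host and scan the displayed QR code (or enter the secret) in the app:
<syntaxhighlight lang=bash>
$ google-authenticator
</syntaxhighlight>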
= Workarounds for CVEs =
In this section I just say: This <b>might</b> help! Absolutely no warranty for anything!<br>
If my workaround does not fix the problem look one line above this one.
== CVE-2023-48795 alias Terrapin ==
First read at [https://terrapin-attack.com/ terrapin-attack.com]
===Check if patches against this CVE are already included in OS===
On Debian/Ubuntu do:
<SyntaxHighlight lang=bash>
$ sudo apt-get changelog openssh-server | grep -i CVE-2023-48795
- debian/patches/CVE-2023-48795.patch: implement "strict key exchange"
- CVE-2023-48795
</SyntaxHighlight>
This means patches against this CVE are included.
===Check ssh and sshd if they offer problematic ciphers and macs===
sshd:
<SyntaxHighlight lang=bash>
$ sudo sshd -T | grep -iE '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
macs umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
ssh: (Example for localhost, other hosts might get other values depending on your ~/.ssh/config or elsewhere)
<SyntaxHighlight lang=bash>
$ ssh -G localhost | grep -E '(chacha20-poly1305@openssh.com|-cbc|-etm@openssh.com)'
ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
macs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
</SyntaxHighlight>
===Include workarounds===
The <i>Include</i> statement was introduced in OpenSSH 7.3p1, as far as I know.
So, if you have an <i>Include</i> statement in<br>
/etc/ssh/sshd_config:
<pre>
Include /etc/ssh/sshd_config.d/*.conf
</pre>
/etc/ssh/ssh_config:
<pre>
Include /etc/ssh/ssh_config.d/*.conf
</pre>
then add the following files:<br>
/etc/ssh/sshd_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
/etc/ssh/ssh_config.d/000_terrapin.conf
<pre>
Ciphers -chacha20-poly1305@openssh.com,-*-cbc*
MACs -*-etm@openssh.com
</pre>
After that do the checks from above. If they look good, restart sshd. If not... toss my trash.
8a914d4e196dad9d926a468ff4467118a0115e8d
TShark
0
238
2788
2735
2024-02-20T12:55:06Z
Lollypop
2
/* MySQL traffic */
wikitext
text/x-wiki
[[Category:MySQL]]
[[Category:Security]]
=TShark=
[https://www.wireshark.org/docs/wsug_html_chunked/AppToolstshark.html TShark is the terminal based wireshark.]
The ultimate tool to sniff network traffic when you have no X. It analyzes the traffic as wireshark does. Great tool!
==DNS Traffic==
<syntaxhighlight lang=bash>
# tshark -n -T fields -e frame.time -e dns.id -e ip.src -e ip.dst -e dns.qry.name -f 'port 53'
</syntaxhighlight>
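A variation on the same idea: to also see the answers, filter on the response flag and print the A records (field names as in current tshark versions):
<syntaxhighlight lang=bash>
# tshark -n -T fields -e dns.qry.name -e dns.a -Y 'dns.flags.response == 1' -f 'port 53'
</syntaxhighlight>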
==MySQL traffic==
To look on an application server for MySQL traffic you can use this line:
<syntaxhighlight lang=bash>
# IFACE=eth0 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -R "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.query 'port 3306'
</syntaxhighlight>
Newer versions of tshark (using -Y instead of -R):
<syntaxhighlight lang=bash>
# IFACE=ens192 ; tshark -i ${IFACE} -d tcp.port==3306,mysql -Y "eth.addr eq $(ip link show ${IFACE} | awk '$1 ~ /link\/ether/{print $2}')" -T fields -e mysql.auth_plugin -e mysql.client_auth_plugin -e mysql.error_code -e mysql.error.message -e mysql.message -e mysql.user -e mysql.passwd -e mysql.command 'port 3306'
</syntaxhighlight>
The little awk magic selects only packets which come from our Ethernet address on interface ''IFACE''.
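The awk part in isolation, fed with canned <i>ip link</i> output instead of a live interface (the interface line is a made-up example):
<syntaxhighlight lang=bash>
$ echo '2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff' | awk '$1 ~ /link\/ether/{print $2}'
00:11:22:33:44:55
</syntaxhighlight>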
==Radius traffic==
Find the client with MAC address fc-18-3c-4a-c1-fa:
<syntaxhighlight lang=bash>
# tshark -Y "tls.handshake.type == 1" -T fields -e frame.number -e ip.src -e tls.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="fc-18-3c-4a-c1-fa"' -f "udp port 1812" -V
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens192'
785 10.155.1.23 fc-18-3c-4a-c1-fa
788 10.155.1.23 0x00000303 fc-18-3c-4a-c1-fa <-- 0x00000303 is TLS handshake version 1.2 , see table below
790 10.155.1.23 fc-18-3c-4a-c1-fa
792 10.155.1.23 fc-18-3c-4a-c1-fa
794 10.155.1.23 fc-18-3c-4a-c1-fa
</syntaxhighlight>
With older tshark versions try:
<syntaxhighlight lang=bash>
# tshark -Y "ssl.handshake.type == 1" -T fields -e frame.number -e ip.src -e ssl.handshake.version -e radius.Calling_Station_Id -Y 'radius.Calling_Station_Id=="8c-85-90-1f-03-ff"' -f "udp port 1812"
</syntaxhighlight>
==Duplicate ACKs==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y tcp.analysis.duplicate_ack
</syntaxhighlight>
==Finding TCP problems==
<syntaxhighlight lang=bash>
# tshark -i eth1 -Y 'expert.message == "Retransmission (suspected)" || expert.message == "Duplicate ACK (#1)" || expert.message == "Out-Of-Order segment"'
</syntaxhighlight>
==Decode SSL Connections==
For example, show used TLS versions lower than 1.2:
<pre>
Supported Version: TLS 1.3 (0x0304)
Supported Version: TLS 1.2 (0x0303)
Supported Version: TLS 1.1 (0x0302)
Supported Version: TLS 1.0 (0x0301)
</pre>
<syntaxhighlight lang=bash>
$ tshark -n -f 'dst port 1812 or dst port 2083' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e udp.dstport -e ssl.handshake.version
192.168.1.87 192.168.1.140 2083 0x00000301
10.155.4.97 192.168.1.141 1812 0x00000301
192.168.1.85 192.168.1.140 2083 0x00000301
...
</syntaxhighlight>
or for https:
<syntaxhighlight lang=bash>
$ tshark -i eth0 -n -f 'dst port 443' -Y "ssl.handshake.version<0x00000303" -T fields -e ip.src_host -e ip.dst_host -e tcp.dstport -e ssl.handshake.version
</syntaxhighlight>
43268d3687b003bcc45e90223e0335922e4c94bb
ESPEasy
0
371
2789
2701
2024-04-10T07:07:48Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
This is about ESP32 and ESP8266 microcontrollers
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit enter, then you will see the command you have entered and the result, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you want this?
This deletes the whole configuration but not the Firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== TimeZone ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
===Controller===
Configure MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this where <i>banana1.fritz.box</i> in this example is the server your MQTT broker is running at.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen on MQTT events. For that you need to configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you listen at.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my relay board ESP12F_Relay_X4 I configured [[#MQTT]] and use rules to switch the 4 relays.
<syntaxhighlight lang=basic>
on system#boot do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
endon
on MQTT#Relay* do
if %eventvalue1%<0
//
// Values below 0 sets timer to %eventvalue1%*-1 seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// Value equal 1 turn on relay
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// Value equal 0 turn off relay
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
timerSet,{substring:10:11:%eventname%},0
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Cron#Relay4 do
logentry,"%eventname%: Turn on Relay4."
gpio,%v4%,1
timerSet,4,30
endon
on Rules#Timer do
logentry,"%eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
366c495906f2a6d24cb01a13c20520ed3dbff94a
2790
2789
2024-04-10T07:08:07Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
This is about ESP32 and ESP8266 microcontrollers
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit enter, then you will see the command you have entered and the result, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you want this?
This deletes the whole configuration but not the Firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== TimeZone ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
===Controller===
Configure MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this where <i>banana1.fritz.box</i> in this example is the server your MQTT broker is running at.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen on MQTT events. For that you need to configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you listen at.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my relay board ESP12F_Relay_X4 I configured [[#MQTT]] and use rules to switch the 4 relays.
<syntaxhighlight lang=basic>
on system#boot do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
endon
on MQTT#Relay* do
if %eventvalue1%<0
//
// Values below 0 sets timer to %eventvalue1%*-1 seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// Value equal 1 turn on relay
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// Value equal 0 turn off relay
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
timerSet,{substring:10:11:%eventname%},0
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Cron#Relay4 do
logentry,"%eventname%: Turn on Relay4."
gpio,%v4%,1
timerSet,4,30
endon
on Rules#Timer do
logentry,"%eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
63d8c282eba92719bb20818d6e479662a394dbc3
2791
2790
2024-04-10T08:02:04Z
Lollypop
2
/* Flash the firmware */
wikitext
text/x-wiki
[[Category:KnowHow]]
This is about ESP32 and ESP8266 microcontrollers
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
=== ESP32 WROOM32 ===
<syntaxhighlight lang=bash>
$ esptool --port /dev/ttyUSB1 --baud 256000 flash_id
esptool.py v2.8
Serial port /dev/ttyUSB1
Connecting.....
Detecting chip type... ESP32
Chip is ESP32D0WDQ5 (revision 3)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 10:06:1c:41:ab:a4
Changing baud rate to 256000
Changed.
Enabling default SPI flash mode...
Manufacturer: 68
Device: 4016
Detected flash size: 4MB
Hard resetting via RTS pin...
$ esptool --port /dev/ttyUSB1 --baud 256000 write_flash -fs 4MB -fm dout 0x0 bin/ESP_Easy_mega_20230623_custom_ESP32_4M316k_LittleFS.factory.bin
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you want this?
This deletes the whole configuration but not the Firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== TimeZone ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
===Controller===
Configure MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this, where <i>banana1.fritz.box</i> is, in this example, the server your MQTT broker runs on.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen for MQTT events. For that you configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this, where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you listen on.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my relay board ESP12F_Relay_X4 I configured [[#MQTT]] and use rules to switch the 4 relays.
<syntaxhighlight lang=basic>
on system#boot do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
endon
on MQTT#Relay* do
if %eventvalue1%<0
//
// A value below 0 sets the timer to %eventvalue1%*-1 seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
timerSet,{substring:10:11:%eventname%},0
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Cron#Relay4 do
logentry,"%eventname%: Turn on Relay4."
gpio,%v4%,1
timerSet,4,30
endon
on Rules#Timer do
logentry,"%eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
c24acbb40e06b4be2f0a887c818f03644f6e2a5f
2792
2791
2024-04-15T13:19:06Z
Lollypop
2
/* ESP32 WROOM32 */
wikitext
text/x-wiki
[[Category:KnowHow]]
This is about the ESP32 and ESP8266 microcontrollers.
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
=== ESP32 WROOM32 ===
<syntaxhighlight lang=bash>
$ esptool --port /dev/ttyUSB1 --baud 512000 write_flash -fs 4MB -fm dout 0x0 bin/ESP_Easy_mega_20240414_collection_A_ESP32_4M316k.factory.bin
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you want this?
This deletes the whole configuration but not the Firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== TimeZone ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
==MQTT==
===Controller===
Configure MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this, where <i>banana1.fritz.box</i> is, in this example, the server your MQTT broker runs on.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen for MQTT events. For that you configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this, where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you listen on.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my relay board ESP12F_Relay_X4 I configured [[#MQTT]] and use rules to switch the 4 relays.
<syntaxhighlight lang=basic>
on system#boot do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
endon
on MQTT#Relay* do
if %eventvalue1%<0
//
// A value below 0 sets the timer to %eventvalue1%*-1 seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
timerSet,{substring:10:11:%eventname%},0
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Cron#Relay4 do
logentry,"%eventname%: Turn on Relay4."
gpio,%v4%,1
timerSet,4,30
endon
on Rules#Timer do
logentry,"%eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
0ddfa8589f7b5752507b08a41a56b6ef8e9fc3a8
2813
2792
2025-03-05T10:20:46Z
Lollypop
2
wikitext
text/x-wiki
[[Category:KnowHow]]
This is about the ESP32 and ESP8266 microcontrollers.
==Flash the firmware==
Flash!<br>
Ah-ah<br>
Saviour of the universe!<br>
<syntaxhighlight lang=bash>
$ sudo apt install --yes esptool
$ wget https://github.com/letscontrolit/ESPEasy/releases/download/mega-20200515/ESPEasy_mega-20200515.zip
$ esptool --port /dev/ttyUSB0 --baud 115200 write_flash 0 ESP_Easy_mega_20200516_test_beta_ESP8266_4M1M.bin
esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 3c:71:bf:2a:a6:0b
Enabling default SPI flash mode...
Configuring flash size...
Auto-detected Flash size: 4MB
Erasing flash...
Took 2.33s to erase flash block
Writing at 0x000dbc00... (87 %)
</syntaxhighlight>
=== ESP32 WROOM32 ===
<syntaxhighlight lang=bash>
$ esptool --port /dev/ttyUSB1 --baud 512000 write_flash -fs 4MB -fm dout 0x0 bin/ESP_Easy_mega_20240414_collection_A_ESP32_4M316k.factory.bin
</syntaxhighlight>
=== ESP32 C6 ===
<syntaxhighlight lang=bash>
$ esptool --port /dev/ttyACM1 --chip esp32c6 -b 460800 --before default_reset --after hard_reset write_flash 0x0 bin/ESP_Easy_mega_20241222_normal_ESP32c6_4M316k_LittleFS_CDC_ETH.factory.bin
</syntaxhighlight>
==Connect via Serial==
<syntaxhighlight lang=bash>
$ minicom -D /dev/ttyUSB0 --baudrate 115200
Welcome to minicom 2.8
OPTIONS: I18n
Port /dev/ttyUSB0, 10:28:08
Press CTRL-A Z for help on special keys
</syntaxhighlight>
=== Possible commands via serial connection ===
You do not see what you type until you hit Enter; then the command you entered and its result are echoed back, like:
<SyntaxHighlight lang=bash>
>Save
OK
</SyntaxHighlight>
You can find all commands here: [https://github.com/letscontrolit/ESPEasy/blob/mega/docs/source/Plugin/P000_commands.repl ESPEasy/docs/source/Plugin/P000_commands.repl].<br>
This is just a subset:
==== Get the Allowed IP range ====
* AccessInfo
==== Get the build number ====
* Build
==== Clear allowed IP range for the web interface for the current session ====
* ClearAccessBlock
==== Clear the password of the unit ====
* ClearPassword
==== Get or set the date and time ====
* Datetime[,YYYY-MM-DD[,hh:mm:ss]]
==== Get or set DNS configuration ====
* DNS[,<IP address>]
==== Get or set serial port debug level ====
* Debug[,<1-4, default is 2>]
==== Get or set the gateway configuration ====
* Gateway[,<IP address>]
==== Run I2C scanner to find connected I2C chips. Output will be sent to the serial port ====
* I2Cscanner
==== Get or set IP address ====
* IP[,<IP address>]
==== Set the password of the unit ====
* Password <mypassword>
==== Reboot ====
* Reboot
==== Reset config to factory default. Caution, all settings will be lost! ====
Be careful! Do you want this?
This deletes the whole configuration but not the Firmware.
* Reset
==== Save config to persistent flash memory ====
* Save
==== Show settings on serial terminal ====
* Settings
==== TimeZone ====
* TimeZone[,<minutes from UTC>]
==== Get or set the status of NTP (Network Time Protocol) ====
* UseNTP[,<0/1>]
==== Configure your local wifi credentials ====
* WifiSSID <myssid>
* WifiKey <mypassword>
* WifiConnect
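For example, a minimal first-time Wi-Fi setup over the serial console might look like this (SSID and key are placeholders; the exact responses may differ slightly by build):
<syntaxhighlight lang=bash>
>WifiSSID MyHomeSSID
OK
>WifiKey MySecretKey
OK
>WifiConnect
OK
>Save
OK
</syntaxhighlight>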
==MQTT==
===Controller===
Configure MQTT connection to the broker like this:<br>
[[image:ESPEasy Controller MQTT.jpg]]
* Click on Add
* Choose Type openHAB
* Enter your parameters like this, where <i>banana1.fritz.box</i> is, in this example, the server your MQTT broker runs on.
<br>
[[image:ESPEasy Controller MQTT Add.jpg]]
===Device===
Now you need to listen for MQTT events. For that you configure a <i>Generic - MQTT Import</i> device:<br>
[[image:ESPEasy Devices Generic MQTT Import.jpg]]
* Click on Add
* Choose Type Generic MQTT Import
* Enter your parameters like this, where <i>MQTT</i> is the event name for the [[#Rules]] and <i>MQTT Topic n</i> are the topics you listen on.
[[image:ESPEasy Devices Generic MQTT Import.jpg]]<br>
[[image:ESPEasy Devices Generic MQTT Import Add.jpg]]
==Rules==
===Switch relays based on MQTT events===
On my relay board ESP12F_Relay_X4 I configured [[#MQTT]] and use rules to switch the 4 relays.
<syntaxhighlight lang=basic>
on system#boot do
let,1,15 // Relay 1 is GPIO15
let,2,14 // Relay 2 is GPIO14
let,3,12 // Relay 3 is GPIO12
let,4,13 // Relay 4 is GPIO13
endon
on MQTT#Relay* do
if %eventvalue1%<0
//
// A value below 0 sets the timer to %eventvalue1%*-1 seconds
//
let,5,abs(%eventvalue1%)
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%) for %v5% seconds."
timerSet,{substring:10:11:%eventname%},%v5%
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=1
//
// A value of 1 turns the relay on
//
logentry,"%eventname%: Turn on {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
gpio,%v{substring:10:11:%eventname%}%,1
elseif %eventvalue1%=0
//
// A value of 0 turns the relay off
//
logentry,"%eventname%: Turn off {substring:5:11:%eventname%} (GPIO%v{substring:10:11:%eventname%}%)."
timerSet,{substring:10:11:%eventname%},0
gpio,%v{substring:10:11:%eventname%}%,0
endif
endon
on Cron#Relay4 do
logentry,"%eventname%: Turn on Relay4."
gpio,%v4%,1
timerSet,4,30
endon
on Rules#Timer do
logentry,"%eventname%: %eventvalue1% GPIO%v%eventvalue1%%"
gpio,%v%eventvalue1%%,0
endon
</syntaxhighlight>
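The <i>{substring:...}</i> expressions in the rules above slice the event name: for an event <i>MQTT#Relay3</i>, characters 5 to 10 give the relay name and character 10 the relay number, which indexes the GPIO variables set at boot. A sketch with plain bash string slicing (this is bash, not ESPEasy syntax):
<syntaxhighlight lang=bash>
# Sketch only: bash slicing with the same indices as the ESPEasy rules
event='MQTT#Relay3'
relay_name=${event:5:6}    # characters 5..10 -> "Relay3"
relay_number=${event:10:1} # character 10     -> "3"
echo "$relay_name uses variable %v$relay_number%"
</syntaxhighlight>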
==Problems==
===interface 0 claimed by ch341 while 'brltty' sets config #1===
<syntaxhighlight lang=bash>
Oct 9 14:40:06 lollybook kernel: [372389.769039] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.769995] usb 3-1.4.3: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1
Oct 9 14:40:07 lollybook kernel: [372390.771187] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0
Oct 9 14:40:07 lollybook kernel: [372390.771258] ch341 3-1.4.3:1.0: device disconnected
</syntaxhighlight>
I found this:
* [https://unix.stackexchange.com/questions/670636/unable-to-use-usb-dongle-based-on-usb-serial-converter-chip Unable to use USB dongle based on USB-serial converter chip]
<syntaxhighlight lang=bash>
for f in /usr/lib/udev/rules.d/*brltty*.rules; do
sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"
done
sudo udevadm control --reload-rules
</syntaxhighlight>
But I also had to disable the brltty-udev.service:
<syntaxhighlight lang=bash>
$ sudo systemctl stop brltty-udev.service
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
</syntaxhighlight>
After disabling brltty I finally got:
<syntaxhighlight lang=bash>
Oct 9 14:41:30 lollybook kernel: [372473.985072] ch341 3-1.4.3:1.0: ch341-uart converter detected
Oct 9 14:41:30 lollybook kernel: [372473.987063] usb 3-1.4.3: ch341-uart converter now attached to ttyUSB0
</syntaxhighlight>
174e8343bc37bf6a446354e07d62c7f11407b1d8
Docker tips and tricks
0
372
2793
2737
2024-04-16T15:12:42Z
Lollypop
2
/* Setting some defaults */
wikitext
text/x-wiki
== Using docker behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit docker.service
</syntaxhighlight>
Enter the next three lines and save:
<syntaxhighlight lang=ini>
[Service]
Environment="HTTP_PROXY=user:pass@proxy:port"
Environment="HTTPS_PROXY=user:pass@proxy:port"
</syntaxhighlight>
Restart docker:
<syntaxhighlight lang=bash>
# systemctl restart docker.service
</syntaxhighlight>
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<syntaxhighlight lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</syntaxhighlight>
== Setting some defaults ==
/etc/docker/daemon.json
<syntaxhighlight lang=json>
{
"insecure-registries" : ["registry.server.de:5000"],
"data-root": "/docker-data/",
"default-address-pools": [
{
"scope": "local",
"base": "10.42.0.0/16",
"size": 24
}
],
"log-driver": "json-file",
"log-opts": {
"max-size": "2m",
"max-file": "10"
}
}
</syntaxhighlight>
Or log via systemd journald:
<syntaxhighlight lang=json>
"log-driver": "journald",
"log-opts": {
"tag": "{{.Name}}"
}
</syntaxhighlight>
a7831f143041275da07c518cc9f000d1aa307fd5
2812
2793
2025-02-21T10:29:02Z
Lollypop
2
wikitext
text/x-wiki
== Using docker behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit docker.service
</syntaxhighlight>
Enter the next three lines and save:
<syntaxhighlight lang=ini>
[Service]
Environment="HTTP_PROXY=user:pass@proxy:port"
Environment="HTTPS_PROXY=user:pass@proxy:port"
</syntaxhighlight>
Restart docker:
<syntaxhighlight lang=bash>
# systemctl restart docker.service
</syntaxhighlight>
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<syntaxhighlight lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</syntaxhighlight>
== Setting some defaults ==
/etc/docker/daemon.json
<syntaxhighlight lang=json>
{
"insecure-registries" : ["registry.server.de:5000"],
"data-root": "/docker-data/",
"default-address-pools": [
{
"scope": "local",
"base": "10.42.0.0/16",
"size": 24
}
],
"log-driver": "json-file",
"log-opts": {
"max-size": "2m",
"max-file": "10"
}
}
</syntaxhighlight>
Or log via systemd journald:
<syntaxhighlight lang=json>
"log-driver": "journald",
"log-opts": {
"tag": "{{.Name}}"
}
</syntaxhighlight>
==tcpdump==
To run tcpdump inside a container you have to enter its network namespace. This can be done with nsenter.<br>
First you need to get the PID of the container:
<syntaxhighlight lang=bash>
# docker inspect --format "{{ .State.Pid }}" my-docker-container-name
92405
</syntaxhighlight>
Then you can enter the namespace of this PID:
<syntaxhighlight lang=bash>
# nsenter -n -t 92405
#
</syntaxhighlight>
Or combined into a single command:
<syntaxhighlight lang=bash>
# nsenter -n -t $(docker inspect --format "{{ .State.Pid }}" my-docker-container-name)
</syntaxhighlight>
17c6ac14f1a84e0fe3055293f2dbee4e7501a580
2814
2812
2025-03-14T10:36:27Z
Lollypop
2
/* tcpdump */
wikitext
text/x-wiki
== Using docker behind a proxy ==
<syntaxhighlight lang=bash>
# systemctl edit docker.service
</syntaxhighlight>
Enter the next three lines and save:
<syntaxhighlight lang=ini>
[Service]
Environment="HTTP_PROXY=user:pass@proxy:port"
Environment="HTTPS_PROXY=user:pass@proxy:port"
</syntaxhighlight>
Restart docker:
<syntaxhighlight lang=bash>
# systemctl restart docker.service
</syntaxhighlight>
== Some useful aliases ==
I put this in my ~/.bash_aliases to maintain a check_mk container:
<syntaxhighlight lang=bash>
alias omd-log='docker container logs monitoring'
alias omd-recreate-volume='docker volume create --driver local --opt type=nfs --opt o=addr=nfs.server.tld,rw --opt device=:/share monitoring'
alias omd-root='docker container exec -it $(docker ps --filter name=monitoring -q) /bin/bash'
alias omd-cmk='docker container exec -it -u omd monitoring bash'
alias omd-start='docker container run --rm -dit -p 8080:5000 --tmpfs /omd/sites/omd/tmp:uid=1000,gid=1000 --ulimit nofile=1024 -v monitoring:/omd/sites --name monitoring -e CMK_SITE_ID=omd -e MAIL_RELAY_HOST='\''smtp-gw.server.tld'\'' -v /etc/localtime:/etc/localtime:ro checkmk/check-mk-raw:1.6.0p12'
alias omd-stop='docker stop $(docker ps --filter name=monitoring -q)'
</syntaxhighlight>
== Setting some defaults ==
/etc/docker/daemon.json
<syntaxhighlight lang=json>
{
"insecure-registries" : ["registry.server.de:5000"],
"data-root": "/docker-data/",
"default-address-pools": [
{
"scope": "local",
"base": "10.42.0.0/16",
"size": 24
}
],
"log-driver": "json-file",
"log-opts": {
"max-size": "2m",
"max-file": "10"
}
}
</syntaxhighlight>
Or log via systemd journald:
<syntaxhighlight lang=json>
"log-driver": "journald",
"log-opts": {
"tag": "{{.Name}}"
}
</syntaxhighlight>
==tcpdump==
To run tcpdump inside a container you have to enter its network namespace. This can be done with nsenter.<br>
First you need to get the PID of the container:
<syntaxhighlight lang=bash>
# docker inspect --format "{{ .State.Pid }}" my-docker-container-name
92405
</syntaxhighlight>
Then you can enter the namespace of this PID:
<syntaxhighlight lang=bash>
# nsenter -n -t 92405
#
</syntaxhighlight>
Or combined into a single command:
<syntaxhighlight lang=bash>
# nsenter -n -t $(docker inspect --format "{{ .State.Pid }}" my-docker-container-name)
</syntaxhighlight>
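Once inside the namespace, tcpdump works as usual; for example (interface name and port are just example values):
<syntaxhighlight lang=bash>
# tcpdump -ni eth0 port 1883
</syntaxhighlight>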
==Execute commands in a specific docker namespace==
<syntaxhighlight lang=bash>
# ln -s /run/docker/netns /run/netns
# container=my-docker-container
# netns=$(docker container inspect ${container} | jq -r '.[] .NetworkSettings.SandboxKey | split("/")[5]')
# ip netns exec ${netns} lsof -Pni :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
traefik 2814891 root 10u IPv6 21349777 0t0 TCP *:80 (LISTEN)
</syntaxhighlight>
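The jq expression takes the sixth path component of <i>SandboxKey</i>, i.e. the namespace name. The same extraction can be sketched with plain bash on a made-up key:
<syntaxhighlight lang=bash>
sandbox_key='/var/run/docker/netns/ab1c2d3e4f'  # made-up example value
netns=${sandbox_key##*/}                        # strip everything up to the last slash
echo "$netns"
</syntaxhighlight>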
51b2b30bdd7bfbfc066423a789805a1d3611f2a2
SSH FingerprintLogging
0
358
2794
2751
2024-06-06T10:57:48Z
Lollypop
2
/* Add magic to your .bashrc */
wikitext
text/x-wiki
[[Category:SSH|Fingerprint]]
[[Category:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It simply makes it possible to set a separate [[Bash]] HISTFILE per logged-in user.
==Add magic to your .bashrc==
* ~/.bashrc
Not fully working... wait...
<syntaxhighlight lang=bash>
...
FINGERPRINT=$([ -z "${SSH_CLIENT}" ] || { ssh_client_array=( ${SSH_CLIENT} ); [ -z "${SSH_CLIENT}" ] || journalctl --lines=100 --grep "${ssh_client_array[0]} port ${ssh_client_array[1]}" --no-pager --quiet --unit=ssh.service | awk 'END{print $NF}' ; } )
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
or
<syntaxhighlight lang=bash>
...
FINGERPRINT=$([ -z "${SSH_CLIENT}" ] || { ssh_client_array=( ${SSH_CLIENT} ); [ -z "${SSH_CLIENT}" ] || journalctl --lines=100 --grep "${ssh_client_array[0]} port ${ssh_client_array[1]}" --no-pager --quiet --unit=ssh.service | awk 'END{print $NF}' ; })
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
f351702984e0e32a064f9f69b513e417ca7b1966
2808
2794
2025-01-16T06:36:44Z
Lollypop
2
/* Add magic to your .bashrc */
wikitext
text/x-wiki
[[Category:SSH|Fingerprint]]
[[Category:Bash|Fingerprint]]
=SSH Fingerprintlogging=
==Why log fingerprints?==
It simply makes it possible to set a separate [[Bash]] HISTFILE per logged-in user.
==Add magic to your .bashrc==
* ~/.bashrc
<syntaxhighlight lang=bash>
...
FINGERPRINT=$([ -z "${SSH_CLIENT}" ] || { ssh_client_array=( ${SSH_CLIENT} ); [ -z "${SSH_CLIENT}" ] || journalctl --lines=100 --grep "Accepted publickey for .* ${ssh_client_array[0]} port ${ssh_client_array[1]} ssh2:" --no-pager --quiet --unit=ssh.service | awk 'END{print $NF}' ; })
export HISTFILE=~/.bash_history_${FINGERPRINT:-${SUDO_USER:-default}}
...
</syntaxhighlight>
This greps the last line matching the current SSH client's IP and port from the ssh.service journal and takes its last field (the hash/fingerprint of the accepted public key) as FINGERPRINT. HISTFILE is then set from whichever is available: $FINGERPRINT, $SUDO_USER, or "default".
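On a sample journal line (user, IP, and fingerprint below are made up), the awk step picks the last field like this:
<syntaxhighlight lang=bash>
# Made-up sshd log line in the format journalctl returns
line='Accepted publickey for lolly from 192.0.2.10 port 51515 ssh2: ED25519 SHA256:AbCdEf123456'
echo "$line" | awk 'END{print $NF}'
</syntaxhighlight>
This prints <i>SHA256:AbCdEf123456</i>, which becomes part of the HISTFILE name.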
9dae457997a3e48b62d8067643d0227ff8046167
Ansible tips and tricks
0
299
2795
2765
2024-06-19T12:46:47Z
Lollypop
2
wikitext
text/x-wiki
[[Category: Ansible | Tips and tricks]]
= Ansible commandline =
== Get settings for host ==
=== Show inventory for host ===
<syntaxhighlight lang=bash>
$ ansible-inventory --host ${hostname}
</syntaxhighlight>
=== Gathering settings for host ${hostname} ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' ${hostname}
</syntaxhighlight>
For example:
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=hostvars[inventory_hostname]' localhost
</syntaxhighlight>
=== Gathering groups for host ${hostname} ===
<syntaxhighlight lang=bash>
$ ansible -m debug -a 'var=group_names' ${hostname}
</syntaxhighlight>
== Get information from host ==
=== Get all installed kernel versions: ===
<syntaxhighlight lang=bash>
$ ansible -m shell -a 'uname -r' 'all' | perl -pe 's#\s+\|\s+CHANGED\s+\|\s+rc=\d+\s>>\s*\n#;#g' > /tmp/kernel.csv
</syntaxhighlight>
=== Get all installed releases: ===
<syntaxhighlight lang=bash>
$ ansible -m setup -a 'filter=ansible_distribution_version' 'all'
</syntaxhighlight>
= Using ansible variables filled by gathering =
== ansible_mounts ==
<syntaxhighlight lang=yaml>
---
- hosts: all
gather_facts: false
vars:
# query all cifs filesystems mounted under /media/cifs
query: "@[?fstype=='cifs'] | @[?starts_with(mount,'/media/cifs/')]"
tasks:
- name: "Just gather mounts"
setup:
gather_subset:
- mounts
- name: "Show all mounts matching query"
debug:
msg: "{{ item }}"
with_items: "{{ ansible_mounts | community.general.json_query(query) }}"
</syntaxhighlight>
= Gathering facts from file =
== Variables from an Oracle response file ==
This snippet reads some variables from the response file, sets each of them as a fact (prefixed with oracle_ unless the name already starts with it), and collects them in the variable <i>oracle_environment</i>. The variable <i>oracle_environment</i> can be used for <i>environment:</i> when you use <i>shell:</i>.
<syntaxhighlight lang=yaml>
vars:
oracle_user: oracle
oracle_version: 12cR2
oracle_response_file: /install/tepmplate_{{ oracle_version }}/db_{{ oracle_version | lower}}.rsp
</syntaxhighlight>
<syntaxhighlight lang=yaml>
- name: "Getting variables for version {{ oracle_version }} from response file"
  shell: |
    awk -F '=' '/{{ item }}/{print $2;}' {{ oracle_response_file }}
  register: oracle_response_variables
  with_items:
    - ORACLE_HOME
    - ORACLE_BASE
    - INVENTORY_LOCATION
  tags:
    - oracle
    - oracle_install

- name: Setting facts from response file to oracle_environment
  set_fact:
    "{{ 'oracle_' + item.item | lower | regex_replace('oracle_','') }}": "{{ item.stdout }}"
    oracle_environment: "{{ oracle_environment | default([]) + [ {item.item: item.stdout} ] }}"
  with_items:
    - "{{ oracle_response_variables.results }}"
  tags:
    - oracle
    - oracle_install
</syntaxhighlight>
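The fact name built in set_fact lower-cases the item and prefixes it with oracle_ unless that prefix is already there (ORACLE_HOME becomes oracle_home, INVENTORY_LOCATION becomes oracle_inventory_location). The same transformation expressed in plain shell, as a sketch:
<syntaxhighlight lang=bash>
# Lower-case, strip a leading oracle_, then prepend oracle_ again
$ mangle() { echo "oracle_$(echo "$1" | tr 'A-Z' 'a-z' | sed 's/^oracle_//')"; }
$ mangle ORACLE_HOME
oracle_home
$ mangle INVENTORY_LOCATION
oracle_inventory_location
</syntaxhighlight>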
= Gathering oracle environment =
<syntaxhighlight lang=yaml>
- name: Calling oraenv
  shell: |
    # Set ORAENV_ASK=NO and ORACLE_SID, ORACLE_HOME, PATH from /etc/oratab
    eval $(awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /etc/oratab)
    # Call /usr/local/bin/oraenv for additional settings
    . /usr/local/bin/oraenv -s
    # Just register what we need for Oracle
    env | egrep "(ORACLE_.*|PATH|LD_LIBRARY_PATH)="
  register: env
  changed_when: False

- name: Creating environment ora_env
  set_fact:
    ora_env: |
      {# Create an empty dictionary #}
      {%- set tmp_env={} -%}
      {# For each line from env call tmp_env.__setitem__(<variable>,<value>) #}
      {%- for line in env.stdout_lines -%}
      {{ tmp_env.__setitem__(line.split('=')[0], line.split('=')[1]) }}
      {%- endfor -%}
      {# Print the created variable #}
      {{ tmp_env }}

- debug: var=ora_env
</syntaxhighlight>
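The eval line above builds one export statement for each /etc/oratab entry whose autostart flag is Y. How the awk part behaves, sketched against a hypothetical oratab (SID names and paths are made up):
<syntaxhighlight lang=bash>
$ cat > /tmp/oratab <<'EOF'
# comment and empty lines are skipped
TEST:/u01/app/oracle/product/12.2.0/dbhome_1:N
PROD:/u01/app/oracle/product/12.2.0/dbhome_1:Y
EOF
$ awk -F':' '!/^[ ]*(#|$)/ && $3=="Y"{printf "export ORAENV_ASK=NO ORACLE_SID=%s ORACLE_HOME=%s PATH=${PATH}:%s/bin\n",$1,$2,$2}' /tmp/oratab
export ORAENV_ASK=NO ORACLE_SID=PROD ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 PATH=${PATH}:/u01/app/oracle/product/12.2.0/dbhome_1/bin
</syntaxhighlight>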
= NetApp Modules =
== NetApp role ==
=== Snapshot user ===
<syntaxhighlight>
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname DEFAULT -access none
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot" -access readonly
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot create" -query "-snapshot ansible_*" -access all
security login role create -vserver cluster01 -role ansible-snapshot-only -cmddirname "volume snapshot delete" -query "-snapshot ansible_*" -access all
security login create -vserver cluster01 -role ansible-snapshot-only -application ontapi -authentication-method password -user-or-group-name ansible-snapuser
</syntaxhighlight>
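With the role above, the user ansible-snapuser may only create or delete snapshots whose name starts with ansible_. A hypothetical playbook task using that login could look like this (sketch only; the volume, vserver and password variable are made up, and it assumes the netapp.ontap collection's na_ontap_snapshot module):
<syntaxhighlight lang=yaml>
- hosts: localhost
  gather_facts: false
  tasks:
    - name: "Create a snapshot as the restricted snapshot user"
      netapp.ontap.na_ontap_snapshot:
        state: present
        snapshot: "ansible_nightly"   # must match the -snapshot ansible_* query of the role
        volume: vol_data              # made-up volume name
        vserver: svm01                # made-up vserver name
        hostname: cluster01
        username: ansible-snapuser
        password: "{{ vault_snapuser_password }}"
</syntaxhighlight>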
OpenSSL
[[category:Security]]
=Verify=
<syntaxhighlight lang=bash>
# openssl verify -CAfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem
</syntaxhighlight>
<syntaxhighlight lang=bash>
# openssl crl2pkcs7 -nocrl -certfile /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT | openssl pkcs7 -print_certs -noout
</syntaxhighlight>
=CSR=
== Create key and CSR ==
<syntaxhighlight lang=bash>
$ subject_without_cn='/C=DE/ST=Hamburg/L=Hamburg/O=Organisation/OU=Team'
$ emailAddress='webadmin@server.de'
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ openssl req -newkey rsa:4096 -sha256 -keyout ${hosts[0]}-key.pem -out ${hosts[0]}-csr.pem -batch -subj "${subject_without_cn}/CN=${hosts[0]}/emailAddress=${emailAddress}" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}${hosts[3]:+,DNS:${hosts[3]}}${hosts[4]:+,DNS:${hosts[4]}}"))
</syntaxhighlight>
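The subjectAltName list above is assembled with bash's ${var:+word} expansion, which expands to word only when the array element is set and non-empty. A minimal sketch of just that part:
<syntaxhighlight lang=bash>
$ declare -a hosts=( "name1.server.de" "name2.server.de" )
$ printf "subjectAltName=DNS:${hosts[0]}${hosts[1]:+,DNS:${hosts[1]}}${hosts[2]:+,DNS:${hosts[2]}}\n"
subjectAltName=DNS:name1.server.de,DNS:name2.server.de
</syntaxhighlight>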
== Verify your CSR==
<syntaxhighlight lang=bash>
$ openssl req -text -noout -verify -in ${hosts[0]}-csr.pem
</syntaxhighlight>
=Print validity for certificate file=
<SyntaxHighlight lang=bash>
#!/bin/bash
for i in ${*}
do
    certfile=${i}
    enddate="$(openssl x509 -enddate -noout -in ${certfile} | sed -e 's#^.*=##g')"
    declare -i valid_seconds=$(( $(date --date="${enddate}" '+%s') - $(date '+%s') ))
    declare -i seconds=${valid_seconds}
    declare -i days=$(( ${seconds} / ( 24 * 60 * 60 ) ))
    seconds=$(( ${seconds} % ( 24 * 60 * 60 ) ))
    declare -i hours=$(( ${seconds} / ( 60 * 60 ) ))
    seconds=$(( ${seconds} % ( 60 * 60 ) ))
    declare -i minutes=$(( ${seconds} / 60 ))
    seconds=$(( ${seconds} % 60 ))
    printf "%s: %s (%d days %d hours %d minutes %d seconds left)\n" "${certfile}" "$(date --date "${enddate}")" ${days} ${hours} ${minutes} ${seconds}
done
</SyntaxHighlight>
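The remaining-time breakdown is plain integer division and modulo on the seconds left; for example, 200000 seconds decompose like this:
<syntaxhighlight lang=bash>
$ declare -i seconds=200000
$ declare -i days=$(( seconds / 86400 ));    seconds=$(( seconds % 86400 ))   # 2 days
$ declare -i hours=$(( seconds / 3600 ));    seconds=$(( seconds % 3600 ))    # 7 hours
$ declare -i minutes=$(( seconds / 60 ));    seconds=$(( seconds % 60 ))      # 33 minutes, 20 seconds
$ echo "${days}d ${hours}h ${minutes}m ${seconds}s"
2d 7h 33m 20s
</syntaxhighlight>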
=Beautify chain certificate=
<SyntaxHighlight lang=bash>
$ awk '
BEGIN{
openssl="openssl x509 -subject -issuer";
}
/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/ {
print $0 | openssl;
if(/-----END CERTIFICATE-----/) {
close(openssl); # end pipe to send this part to openssl command
}
}' cert.pem
</SyntaxHighlight>
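The close(openssl) call is what makes this work: it ends the pipe after every END line, so each certificate of the chain is fed to a fresh openssl process. The same pattern with a harmless command in place of openssl, counting the lines of each block separately:
<syntaxhighlight lang=bash>
$ printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nBBB\nCCC\n-----END CERTIFICATE-----\n' | awk '
BEGIN{
  cmd="wc -l";
}
/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/ {
  print $0 | cmd;
  if(/-----END CERTIFICATE-----/) {
    close(cmd); # start a fresh pipe for the next block
  }
}'
3
4
</syntaxhighlight>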
Admin hints
[[category:KnowHow]]
==Cheat sheets==
* [https://cheat.sh General cheat sheet, usable via curl]
==DNS==
===Get your IP address===
<syntaxhighlight lang=bash>
$ dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com
</syntaxhighlight>
==Remove empty and comment lines==
<syntaxhighlight lang=bash>
$ grep -Ev "^(\s*|(//|[;#]).*)$" <file>
</syntaxhighlight>
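A quick check of the pattern against a hypothetical config file:
<syntaxhighlight lang=bash>
$ cat > /tmp/sample.conf <<'EOF'
# a hash comment
; an ini-style comment
// a C-style comment

key = value
EOF
$ grep -Ev "^(\s*|(//|[;#]).*)$" /tmp/sample.conf
key = value
</syntaxhighlight>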
Linux grub
[[Category:Linux|Grub]]
[[Category:Grub|Linux]]
=grub rescue>=
The problem:
<syntaxhighlight lang=bash>
...
Entering rescue mode...
grub rescue>
</syntaxhighlight>
==Get into the normal grub==
Find your devices:
<syntaxhighlight lang=bash>
grub rescue> ls
</syntaxhighlight>
===Find the directory where the normal.mod file resides===
In this example we use LVM, and /boot/grub lives in the LV lv-root of the VG vg-root.
<syntaxhighlight lang=bash>
grub rescue> ls (lvm/vg--root-lv--root)/boot/grub/i386-pc
... normal.mod ...
</syntaxhighlight>
===Set the prefix to the right place===
<syntaxhighlight lang=bash>
grub rescue> set prefix=(lvm/vg--root-lv--root)/boot/grub
</syntaxhighlight>
===Now you can load and start the module called "normal"===
<syntaxhighlight lang=bash>
grub rescue> insmod normal
grub rescue> normal
</syntaxhighlight>
If the menu does not appear, you get something like this:
<syntaxhighlight lang=bash>
GNU GRUB version 1.99,5.11.0.175.2.0.0.42.2
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub>
</syntaxhighlight>
==Normal grub is booted, now start the kernel==
===Example for LVM===
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod lvm
insmod ext2
set root='lvmid/KAlPF4-Qb8I-Sx41-10cC-lACw-Msoh-3qEohv/pmE9Nt-rLG3-FlNM-CwOT-hy42-gSnm-fZSn3l'
linux /boot/vmlinuz-4.4.0-53-generic root=/dev/mapper/vg--root-lv--root ro
initrd /boot/initrd.img-4.4.0-53-generic
</syntaxhighlight>
===Example for ZFS-Root===
<syntaxhighlight lang=bash>
insmod gzio
insmod part_msdos
insmod part_gpt
insmod zfs
set root='hd0,msdos4'
linux /ROOT/ubuntu-15.04@/boot/vmlinuz-4.4.0-57-generic root=ZFS=rpool/ROOT/ubuntu-15.04 boot=zfs zfs_force=1 ro quiet splash nomdmonddf nomdmonisw $vt_handoff
initrd /ROOT/ubuntu-15.04@/boot/initrd.img-4.4.0-57-generic
</syntaxhighlight>
===Example for ZFS-Root on GPT with known nothing about the location of kernel and initrd===
<br><pre><TAB> means: press the tab key.</pre>
<syntaxhighlight lang=bash>
grub> insmod gzio
grub> insmod part_gpt
grub> insmod part_msdos
grub> insmod zfs
grub> ls (<TAB>
Possible devices are:
proc memdisk hd0 hd1 hd2
</syntaxhighlight>
So we have several disks. Let us see what partitions we have:
<syntaxhighlight lang=bash highlight="5">
grub> ls (hd0<TAB>
Possible partitions are:
Device hd0: No known filesystem detected - Sector size 512B - Total size 5242880KiB
Partition hd0,gpt1: Filesystem type fat - Label `EFI', UID 4711-6C33 - Partition start at 1024KiB - Total size 524288KiB
Partition hd0,gpt3: Filesystem type zfs - Label `bpool' - Last modification time 2024-07-01 04:36:00 Monday, UUID 001702272575c1blabla - Partition start at 525312KiB - Total size 4717551.5KiB
Partition hd0,gpt5: No known filesystem detected - Partition start at 24KiB - Total size 1000KiB
</syntaxhighlight>
Now we know more about the partitions, and judging by the filesystem types and labels, hd0,gpt3 is a good place to start looking. But the last modification time looks a little old. We will see (Tell me why I don't like Mondays...).
<syntaxhighlight lang=bash>
grub> ls (hd0,gpt3)/<TAB>
Possible files are:
@/ BOOT/
</syntaxhighlight>
You can use tab completion until you have found the right dataset, then append /@/ at the end:
<syntaxhighlight lang=bash>
grub> ls (hd0,gpt3)/BOOT/ubuntu_flupdy/@/
efi grub
grub>
</syntaxhighlight>
Soooo... now we know our dataset was not properly mounted when grub was configured. But we migrated from everything in rpool to rpool plus bpool. Maybe there is a snapshot on rpool where a /boot with a kernel still dangles around...
<syntaxhighlight lang=bash>
grub> ls (hd1,gpt1)/ROOT/<TAB>
...
@zfs-auto-snap_daily-2024-06-31-06.42.00--7d/
@zfs-auto-snap_daily-2024-06-30-06.42.00--7d/
@zfs-auto-snap_daily-2024-06-29-06.43.00--7d/
@zfs-auto-snap_daily-2024-06-28-06.42.00--7d/
@zfs-auto-snap_daily-2024-06-27-06.44.00--7d/
@zfs-auto-snap_daily-2024-06-26-06.42.00--7d/
...
grub> ls (hd1,gpt1)/ROOT/ubuntu_flupdy/@zfs-auto-snap_daily-2024-06-29-06.43.00--7d/boot/
vmlinuz-6.5.0-45-generic ...
</syntaxhighlight>
YES! That is what we were searching for!
<syntaxhighlight lang=bash>
set root='hd1,gpt1'
linux "/ROOT/ubuntu_flupdy/@zfs-auto-snap_daily-2024-06-29-06.43.00--7d/boot/vmlinuz-6.5.0-45-generic" root=ZFS="rpool/ROOT/ubuntu_flupdy" ro single nomodeset dis_ucode_ldr text console=tty0 console=ttyS0,115200n8 nosplash init_on_alloc=0
initrd "/ROOT/ubuntu_flupdy/@zfs-auto-snap_daily-2024-06-29-06.43.00--7d/boot/initrd.img-6.5.0-45-generic"
boot
</syntaxhighlight>
Here we go!
Systemd
[[Category:Linux]]
=systemd=
Yes, as is usual for daemons, the name is written lowercase.
=What is systemd?=
systemd is a replacement for the old and rusty init system of Linux.
It has many new features and extends the classic init system with the ability to supervise processes after they have been started and to list sockets owned by processes started by systemd; it adds security features like [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] and a lot more.
Maybe it will be as good as SMF (Service Management Facility) of Solaris one day :-).
=Take a look with systemctl=
==List units==
As you can see, there are hardware- and software-related units.
<syntaxhighlight lang=bash>
# systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:02.0-backlight-acpi_video0.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged /sys/devices/pci0000:00/0000:00:02.0/drm
sys-devices-pci0000:00-0000:00:19.0-net-eth0.device loaded active plugged 82579LM Gigabit Network Connection
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0-rfkill3.device loaded active plugged /sys/devices/pci0000:00/0000
sys-devices-pci0000:00-0000:00:1a.0-usb1-1\x2d1-1\x2d1.4-1\x2d1.4:1.0-bluetooth-hci0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1a.0
sys-devices-pci0000:00-0000:00:1b.0-sound-card0.device loaded active plugged 6 Series/C200 Series Chipset Family High Definition Audio Contro
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-ieee80211-phy0-rfkill2.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0
sys-devices-pci0000:00-0000:00:1c.1-0000:03:00.0-net-wlan0.device loaded active plugged Centrino Advanced-N 6205 [Taylor Peak] (Centrino Advanced-N 62
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.1-tty-ttyACM0.device loaded active plugged F5521gw
sys-devices-pci0000:00-0000:00:1d.0-usb2-2\x2d1-2\x2d1.4-2\x2d1.4:1.3-tty-ttyACM1.device loaded active plugged F5521gw
...
session-c2.scope loaded active running Session c2 of user lollypop
accounts-daemon.service loaded active running Accounts Service
● anacron.service loaded failed failed Run anacron jobs
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
...
</syntaxhighlight>
In this example you can see that the anacron.service failed to start.
==Display unit status==
<syntaxhighlight lang=bash>
# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fr 2015-08-28 09:18:13 CEST; 31min ago
Process: 1591 ExecStart=/usr/sbin/anacron -dsq (code=exited, status=1/FAILURE)
Main PID: 1591 (code=exited, status=1/FAILURE)
Aug 28 09:18:13 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:18:13 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:18:13 lollybook systemd[1]: anacron.service: main process exited, code=exited, status=1/FAILURE
Aug 28 09:18:13 lollybook anacron[1591]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 28 09:18:13 lollybook systemd[1]: Unit anacron.service entered failed state.
Aug 28 09:18:13 lollybook systemd[1]: anacron.service failed.
</syntaxhighlight>
Ah, I had deleted the anacron spool directory. ;-)
==Restart units==
Fix the problem and restart the service.
<syntaxhighlight lang=bash>
root@lollybook:~# mkdir /var/spool/anacron
root@lollybook:~# systemctl restart anacron.service
root@lollybook:~# systemctl status anacron
● anacron.service - Run anacron jobs
Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
Active: active (running) since Fr 2015-08-28 09:53:49 CEST; 4s ago
Main PID: 5179 (anacron)
CGroup: /system.slice/anacron.service
└─5179 /usr/sbin/anacron -dsq
Aug 28 09:53:49 lollybook systemd[1]: Started Run anacron jobs.
Aug 28 09:53:49 lollybook systemd[1]: Starting Run anacron jobs...
Aug 28 09:53:49 lollybook anacron[5179]: Anacron 2.3 started on 2015-08-28
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.daily' in 5 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.weekly' in 10 min.
Aug 28 09:53:49 lollybook anacron[5179]: Will run job `cron.monthly' in 15 min.
Aug 28 09:53:49 lollybook anacron[5179]: Jobs will be executed sequentially
</syntaxhighlight>
==Display unit declaration==
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
==Sockets==
<syntaxhighlight lang=bash>
# systemctl list-sockets --all
LISTEN UNIT ACTIVATES
/run/acpid.socket acpid.socket acpid.service
/run/systemd/fsckd systemd-fsckd.socket systemd-fsckd.service
/run/systemd/initctl/fifo systemd-initctl.socket systemd-initctl.service
/run/systemd/journal/dev-log systemd-journald-dev-log.socket systemd-journald.service
/run/systemd/journal/socket systemd-journald.socket systemd-journald.service
/run/systemd/journal/stdout systemd-journald.socket systemd-journald.service
/run/systemd/journal/syslog syslog.socket rsyslog.service
/run/systemd/shutdownd systemd-shutdownd.socket systemd-shutdownd.service
/run/udev/control systemd-udevd-control.socket systemd-udevd.service
/run/uuidd/request uuidd.socket uuidd.service
/var/run/avahi-daemon/socket avahi-daemon.socket avahi-daemon.service
/var/run/cups/cups.sock cups.socket cups.service
/var/run/dbus/system_bus_socket dbus.socket dbus.service
127.0.0.1:631 cups.socket cups.service
[::1]:631 cups.socket cups.service
audit 1 systemd-journald-audit.socket systemd-journald.service
kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service
17 sockets listed.
</syntaxhighlight>
==View dependencies==
What depends on ''zfs.target'':
<syntaxhighlight lang=bash>
# systemctl list-dependencies --reverse zfs.target
zfs.target
● ├─basic.target
...
● └─multi-user.target
...
</syntaxhighlight>
And what do we need to reach the ''zfs.target''?
<syntaxhighlight lang=bash>
# systemctl list-dependencies --recursive zfs.target
zfs.target
● ├─zed.service
● ├─zfs-mount.service
● └─zfs-share.service
</syntaxhighlight>
==Get the main PID of a service==
<syntaxhighlight lang=bash>
$ systemctl show --property=MainPID --value ssh.service
2026
</syntaxhighlight>
=Security=
==Use capabilities to drop user privileges (CapabilityBoundingSet)==
<syntaxhighlight lang=ini>
# systemctl cat systemd-networkd.service --no-pager
...
[Service]
Type=notify
Restart=on-failure
RestartSec=0
ExecStart=/lib/systemd/systemd-networkd
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
ProtectSystem=full
ProtectHome=yes
WatchdogSec=1min
...
</syntaxhighlight>
Now the process is started with exactly the capabilities it needs. Even if it starts as root, all unnecessary capabilities are dropped before the process runs.
I don't want to copy the whole man page of [http://manpages.ubuntu.com/manpages/vivid/en/man7/capabilities.7.html capabilities(7)] here, but you can take a look to understand what these capabilities are.
'''BUT''' beware of programs that just test for UID 0!
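The same hardening can be applied to your own units via a drop-in. A minimal sketch, assuming a hypothetical daemon that only needs to bind a privileged port (the unit name ''my-daemon.service'' is a placeholder, not from this wiki):
<syntaxhighlight lang=ini>
# Hypothetical drop-in, created e.g. via: systemctl edit my-daemon.service
[Service]
# Drop everything except binding to ports < 1024
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
</syntaxhighlight>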
==Nailing a process to its rights: NoNewPrivileges==
Setting ''NoNewPrivileges=true'' ensures that the process tree from this level on is stuck with the UID and the privileges it has. This prohibits privilege escalation: no setuid binary will help an attacker gain more privileges than the user of the exploited service.
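As a sketch, the option goes into the [Service] section of a unit or drop-in (the unit name is again a placeholder):
<syntaxhighlight lang=ini>
# Hypothetical drop-in for my-daemon.service:
# the process and all its children can never gain new privileges.
[Service]
NoNewPrivileges=true
</syntaxhighlight>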
==Limiting access to a socket==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
Deny from all, but the monitoring server (172.17.128.193):
<syntaxhighlight lang=ini>
[Socket]
IPAddressDeny=any
IPAddressAllow=172.17.128.193
</syntaxhighlight>
==Limiting a socket to IPv4==
For example for the check_mk monitoring system:
<syntaxhighlight lang=ini>
# systemctl edit check_mk.socket
</syntaxhighlight>
First clear the old value, then set the new one.
<syntaxhighlight lang=ini>
[Socket]
ListenStream=
ListenStream=0.0.0.0:6556
</syntaxhighlight>
=systemd-resolved the name resolve service=
==Status==
<syntaxhighlight lang=bash>
$ systemd-resolve --status
Global
DNS Domain: fritz.box
DNSSEC NTA: 10.in-addr.arpa
168.192.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 3 (wlan0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 192.168.178.1
DNS Domain: fritz.box
</syntaxhighlight>
==Cache statistics==
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1824
Cache
Current Cache Size: 11
Cache Hits: 1104
Cache Misses: 771
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
==Flush the cache==
<syntaxhighlight lang=bash>
$ systemd-resolve --flush-caches
</syntaxhighlight>
Check with:
<syntaxhighlight lang=bash>
$ systemd-resolve --statistics
DNSSEC supported by current servers: no
Transactions
Current Transactions: 0
Total Transactions: 1809
Cache
Current Cache Size: 0 <--- Empty
Cache Hits: 1099
Cache Misses: 761
DNSSEC Verdicts
Secure: 0
Insecure: 0
Bogus: 0
Indeterminate: 0
</syntaxhighlight>
=systemd-timesyncd, an alternative to ntp=
The ntpd is a good old workhorse for servers, but clients do not necessarily need it. Just give systemd-timesyncd a chance.
Configuration can be easily made through <i>/etc/systemd/timesyncd.conf</i>:
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=ptbtime1.ptb.de hora.cs.tu-berlin.de
FallbackNTP=ntp.ubuntu.com
</syntaxhighlight>
The NTP setting is a space-separated list of NTP servers.
FallbackNTP lists servers that are used if none of the servers in the NTP list can be reached.
If you want to split the settings into multiple files or generate them at start, you can put files ending in <i>.conf</i> into <i>/etc/systemd/timesyncd.conf.d/</i>.
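A sketch of generating such a drop-in from the shell. It writes to a scratch directory here so it can be tried anywhere; on a real system the target directory is <i>/etc/systemd/timesyncd.conf.d/</i> (needs root), and the file name <i>10-local-ntp.conf</i> is my own choice:

```shell
# Create the drop-in directory and a .conf file in it.
# Scratch dir for demonstration; use /etc/systemd/timesyncd.conf.d
# on a real system.
conf_dir=$(mktemp -d)/timesyncd.conf.d
mkdir -p "$conf_dir"
cat > "$conf_dir/10-local-ntp.conf" <<'EOF'
[Time]
NTP=ptbtime1.ptb.de ptbtime2.ptb.de
EOF
cat "$conf_dir/10-local-ntp.conf"
```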
After you have set up the config, you can enable timesyncd via:
<syntaxhighlight lang=bash>
# timedatectl set-ntp true
</syntaxhighlight>
Control your success with:
<syntaxhighlight lang=bash>
# timedatectl
Local time: Fr 2016-07-01 09:16:24 CEST
Universal time: Fr 2016-07-01 07:16:24 UTC
RTC time: Fr 2016-07-01 07:16:24
Time zone: Europe/Berlin (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
</syntaxhighlight>
Nice, it worked: <i>NTP synchronized: yes</i>.
If not, take a look with <i>systemctl</i>:
<syntaxhighlight lang=bash>
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: inactive (dead)
Condition: start condition failed at Fr 2016-07-01 10:49:15 CEST; 1h 43min left
Docs: man:systemd-timesyncd.service(8)
</syntaxhighlight>
Hmm... let us take a look at ntp:
<syntaxhighlight lang=bash>
# systemctl status ntp.service
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (exited) since Fr 2016-07-01 10:49:19 CEST; 1h 44min left
Docs: man:systemd-sysv-generator(8)
</syntaxhighlight>
Maybe we should uninstall or disable ntp first ;-).
<syntaxhighlight lang=bash>
# systemctl stop ntp.service
# systemctl disable ntp.service
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl start systemd-timesyncd.service
# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Fr 2016-07-01 09:06:10 CEST; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 12360 (systemd-timesyn)
Status: "Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de)."
CGroup: /system.slice/systemd-timesyncd.service
└─12360 /lib/systemd/systemd-timesyncd
Jul 01 09:06:10 lollybook systemd[1]: Starting Network Time Synchronization...
Jul 01 09:06:10 lollybook systemd[1]: Started Network Time Synchronization.
Jul 01 09:06:10 lollybook systemd-timesyncd[12360]: Synchronized to time server 192.53.103.108:123 (ptbtime1.ptb.de).
</syntaxhighlight>
That's it!
=Units=
==[Unit]==
===Define dependencies===
For example the ''zfs.target'' is defined like this:
<syntaxhighlight lang=ini>
# systemctl cat zfs.target
# /lib/systemd/system/zfs.target
[Unit]
Description=ZFS startup target
Requires=zfs-mount.service
Requires=zfs-share.service
Wants=zed.service
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
This means that to reach ''zfs.target'' we want ''zed.service'' to be started (if enabled), and we need ''zfs-mount.service'' and ''zfs-share.service''.
===Directories===
====ReadWrite-, ReadOnly- and InaccessibleDirectories====
====Private Tmp-Directories====
Mounts a private incarnation of /tmp and /var/tmp which only lives as long as the unit is up. When the unit goes down, the directories are cleared. This is done via a separate mount namespace for this unit.
<syntaxhighlight lang=ini>
[Service]
...
PrivateTmp=true|false
...
</syntaxhighlight>
If several units should share a private tmp-directory you can use ''JoinsNamespaceOf=<unit1>[,<unit2>,<unit3>]''.
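A minimal sketch of two units sharing one private /tmp (the unit names ''foo.service'' and ''bar.service'' are placeholders):
<syntaxhighlight lang=ini>
# Drop-in for bar.service: join foo.service's mount namespace,
# so both see the same private /tmp. Both units set PrivateTmp=true.
[Unit]
JoinsNamespaceOf=foo.service

[Service]
PrivateTmp=true
</syntaxhighlight>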
==[Service]==
==[Install]==
=Tools=
==Testing around with capabilities==
For example arping:
<syntaxhighlight lang=bash>
# getcap /usr/bin/arping
/usr/bin/arping = cap_net_raw+ep
</syntaxhighlight>
With this capability set we can use this as normal user:
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.774ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.658ms
</syntaxhighlight>
If we remove this capability it does not work:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=-ep /usr/bin/arping
</syntaxhighlight>
<syntaxhighlight lang=bash>
lollypop $ /usr/bin/arping -I wlan0 192.168.178.1
arping: socket: Operation not permitted
</syntaxhighlight>
Of course it still works as root as root has all capabilities:
<syntaxhighlight lang=bash>
root@lollybook:~# /usr/bin/arping -I wlan0 192.168.178.1
ARPING 192.168.178.1 from 192.168.178.31 wlan0
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 2.052ms
Unicast reply from 192.168.178.1 [24:65:11:F0:DC:A8] 1.852ms
Received 2 response(s)
</syntaxhighlight>
So we better set this capability again:
<syntaxhighlight lang=bash>
# setcap cap_net_raw=+ep /usr/bin/arping
</syntaxhighlight>
= Logging with syslog-ng and systemd in a chroot environment =
If you have a chroot environment (here I have /var/chroot) some things are a little bit tricky.
==The needed logging socket in your chroot is /run/systemd/journal/dev-log==
Prepare the mountpoint:
<syntaxhighlight lang=bash>
# mkdir -p /var/chroot/run/systemd/journal
# touch /var/chroot/run/systemd/journal/dev-log
</syntaxhighlight>
===Get the name for the needed unit file===
The name of a .mount unit file has to match the path of the mount destination: slashes are turned into dashes, and literal dashes must be escaped as \x2d. To get the resulting name you can simply use systemd-escape.
<syntaxhighlight lang=bash>
# systemd-escape -p --suffix=mount /var/chroot/run/systemd/journal/dev-log
var-chroot-run-systemd-journal-dev\x2dlog.mount
</syntaxhighlight>
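The mapping rule itself (strip the leading slash, escape literal dashes as \x2d, turn the remaining slashes into dashes) can be mirrored in plain shell. This is just an illustration of the rule; systemd-escape remains the authoritative tool:

```shell
# Mirror the systemd-escape path mapping by hand:
# 1. strip the leading slash
# 2. escape literal dashes as \x2d
# 3. replace slashes with dashes, then append the unit suffix
path=/var/chroot/run/systemd/journal/dev-log
unit=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g').mount
printf '%s\n' "$unit"
# var-chroot-run-systemd-journal-dev\x2dlog.mount
```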
===Create the unit file /lib/systemd/system/var-chroot-run-systemd-journal-dev\\x2dlog.mount for the mount===
Remember to double the backslash (\\) of \x2d on the shell (\x2d is the escaped dash, -).
<syntaxhighlight lang=bash>
# systemctl edit var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
I want to mount it before syslog-ng and pdns-recursor are up.
Put this contents in the file:
<syntaxhighlight lang=ini>
[Unit]
Description=Mount /run/systemd/journal/dev-log to chroot
DefaultDependencies=no
ConditionPathExists=/var/chroot/run/systemd/journal/dev-log
ConditionCapability=CAP_SYS_ADMIN
After=systemd-modules-load.service
Before=pdns-recursor.service
Before=syslog-ng.service
[Mount]
What=/run/systemd/journal/dev-log
Where=/var/chroot/run/systemd/journal/dev-log
Type=none
Options=bind
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
===Mount the socket===
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable var-chroot-run-systemd-journal-dev\\x2dlog.mount
# systemctl start var-chroot-run-systemd-journal-dev\\x2dlog.mount
</syntaxhighlight>
Check the success:
<syntaxhighlight lang=bash>
# grep /var/chroot/run/systemd/journal/dev-log /proc/mounts
tmpfs /var/chroot/run/systemd/journal/dev-log tmpfs rw,nosuid,noexec,relatime,size=101604k,mode=755 0 0
</syntaxhighlight>
==Tell the journald to forward logging lines to the socket==
===/etc/systemd/journald.conf.d/ForwardToSyslog.conf===
Check if ForwardToSyslog is not already set to yes by default:
<syntaxhighlight lang=bash>
# systemd-analyze cat-config systemd/journald.conf
</syntaxhighlight>
If not, place a file in /etc/systemd/journald.conf.d, e.g. /etc/systemd/journald.conf.d/ForwardToSyslog.conf, with the following content:
<syntaxhighlight lang=ini>
[Journal]
ForwardToSyslog=yes
</syntaxhighlight>
Recheck the config with systemd-analyze from above.
Restart the journal daemon:
<syntaxhighlight lang=bash>
# systemctl restart systemd-journald.service
</syntaxhighlight>
==Configure syslog-ng==
===/etc/syslog-ng/syslog-ng.conf===
Take the log from systemd-journald socket:
<syntaxhighlight>
...
source s_src {
system();
internal();
unix-dgram ("/run/systemd/journal/dev-log");
};
...
</syntaxhighlight>
===Example for powerdns recursor===
====/etc/syslog-ng/conf.d/destination.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server destination
destination d_pdns { file("/var/log/powerdns/pdns.log"); };
destination d_pdns_recursor { file("/var/log/powerdns/recursor.log"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/filter.d/pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server filter
filter f_pdns { program("^pdns$"); };
filter f_pdns_recursor { program("^pdns_recursor$"); };
</syntaxhighlight>
====/etc/syslog-ng/conf.d/log.d/90_pdns.conf====
<syntaxhighlight>
# PowerDNS authoritative server default final file log
log { source(s_src); filter(f_pdns); destination(d_pdns); flags(final); };
log { source(s_src); filter(f_pdns_recursor); destination(d_pdns_recursor); flags(final); };
</syntaxhighlight>
===Restart syslog-ng daemon===
<syntaxhighlight lang=bash>
# systemctl restart syslog-ng.service
</syntaxhighlight>
= systemd-tmpfiles =
The housekeeping of temporary directories is done by the service <i>systemd-tmpfiles-clean.service</i>.
This service is triggered by the timer <i>systemd-tmpfiles-clean.timer</i>.
To use this service for the PrivateTmp directories of e.g. <i>apache2.service</i>, you can use a config file under <i>/etc/[https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html tmpfiles.d]/</i>, like this example <i>/etc/tmpfiles.d/apache-cleanup.conf</i>:
<pre>
e /tmp/systemd-private-%b-apache2.service-*/tmp - - - 6h
</pre>
This will clean up all files under <i>/tmp/systemd-private-%b-apache2.service-*/tmp</i> which are older than 6 hours, every time <i>systemd-tmpfiles-clean.service</i> runs.
The <i>%b</i> in the path is the current boot-id.
What is that? An ID which is generated at each boot.
You can get the boot-id with:
<syntaxhighlight lang=bash>
# journalctl --list-boots
</syntaxhighlight>
The second field of the last line is the current one, e.g.:
<syntaxhighlight lang=bash>
# journalctl --list-boots | awk 'END {print $2}'
52ae0c2a587a47048ee76818ede269a6
</syntaxhighlight>
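As an aside, the current boot-id is also exposed directly by the kernel; stripping the dashes gives the same format journalctl prints:

```shell
# /proc/sys/kernel/random/boot_id holds the boot-id as a UUID
# with dashes; journalctl prints it without them.
boot_id=$(tr -d '-' < /proc/sys/kernel/random/boot_id)
printf '%s\n' "$boot_id"
```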
When will that be? Try:
<syntaxhighlight lang="bash">
# systemctl list-timers systemd-tmpfiles-clean.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Thu 2020-08-13 16:07:24 CEST 46min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
</syntaxhighlight>
OK, but you probably want to run it once an hour? Just reschedule the timer like this:
<syntaxhighlight lang="bash">
# systemctl edit systemd-tmpfiles-clean.timer
</syntaxhighlight>
and change the interval like this:
<pre>
[Timer]
OnUnitActiveSec=1h
</pre>
Well done...
= Examples =
== fwupd.service behind proxy ==
<syntaxhighlight lang=bash>
# systemctl edit fwupd-refresh.service
</syntaxhighlight>
<syntaxhighlight lang=ini>
[Service]
Environment=http_proxy="http://user:passw0rd@proxy.intern.net:8080" https_proxy="http://user:passw0rd@proxy.intern.net:8080"
PassEnvironment=http_proxy https_proxy
</syntaxhighlight>
== Tomcat ==
=== /etc/systemd/system/tomcat-example.service ===
Simple service definition with some security options (ReadOnlyDirectories):
<syntaxhighlight lang=ini>
# /etc/systemd/system/my-tomcat.service
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target remote-fs.target
ConditionPathExists=/opt/tomcat/bin
ConditionPathExists=/home/tomcat/bin
[Service]
Type=forking
User=tomcat
Group=java
PrivateTmp=true
RuntimeDirectory=tomcat-example
RuntimeDirectoryMode=0700
ReadOnlyDirectories=/etc
ReadOnlyDirectories=/lib
ReadOnlyDirectories=/usr
EnvironmentFile=/home/tomcat/.Tomcat_init_systemd
PIDFile=/run/tomcat-example/tomcat.pid
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
SuccessExitStatus=0
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
=== /etc/polkit-1/rules.d/57-tomcat-example.rules ===
Allow the user <i>tomcat</i> to start/stop the service:
<syntaxhighlight>
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "tomcat-example.service" &&
subject.user == "tomcat") {
return polkit.Result.YES;
}
});
</syntaxhighlight>
== Oracle ==
UNTESTED, just an example!
File this as /usr/lib/systemd/system/dbora@.service (SLES12):
<syntaxhighlight lang=ini>
# This file is part of systemd.
#
# Configure instances for your oracle database versions like this
# # systemctl enable dbora@<product>.service
# e.g.:
# # systemctl enable dbora@12cR1.service
#
[Unit]
Description=Oracle Database %I
After=syslog.target network.target
[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
# https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitMEMLOCK=infinity
LimitNOFILE=65535
#
Type=simple
RemainAfterExit=yes
User=oracle
Group=dba
Environment="ORACLE_HOME=/opt/oracle/product/%i/db"
# Note: systemd does not run a shell, so redirections and '&' do not
# work in ExecStart/ExecStop; stopping uses dbshut, not dbstart.
ExecStart=/opt/oracle/product/%i/db/bin/dbstart $ORACLE_HOME
ExecStop=/opt/oracle/product/%i/db/bin/dbshut $ORACLE_HOME
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
<syntaxhighlight lang=bash>
# systemctl daemon-reload
# systemctl enable dbora@12cR2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dbora@12cR2.service to /usr/lib/systemd/system/dbora@.service.
</syntaxhighlight>
48b1c8d5e9bf6593a8fa76e1c68181815a1f074d
Linux Tipps und Tricks
0
273
2804
2766
2024-11-19T12:40:35Z
Lollypop
2
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
20 GB
# echo 1 > /sys/class/block/${device}/device/rescan
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
25 GB
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! Like in this example, the lowest SCSI-ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> from the Ubuntu package lsscsi:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
<syntaxhighlight lang=bash>
# mount
# pvs
# zpool status
# etc.
</syntaxhighlight>
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the number reported from the lsscsi above.
Et voila:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]. After that, parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now inform ZPool to grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Send an online to the already onlined device to force a recheck in the ZPool to resize it without export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voila:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand to off if you want to prevent automatic expansion when the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
<syntaxhighlight lang=bash>
# lvextend -l +100%FREE /dev/vg-root/log
</syntaxhighlight>
Done.
==Find open but deleted files==
Sometimes you have a full filesystem, but cannot see files with ls.<br>
And the output of <i>du -sh <mountpoint></i> and <i>df -h <mountpoint></i> differ, because <i>du</i> just sums the files by traversing the directory.<br>
Then it is time to look for files that are open by any process but deleted in the filesystem.<br>
You can investigate the /proc kernel filesystem:
<syntaxhighlight lang=bash>
# find /proc/*/fd -ls | grep '(deleted)'
91565697 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/7 -> /tmp/ibNhVEnm\ (deleted)
91565698 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/8 -> /tmp/ibhSEF8n\ (deleted)
91565699 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/9 -> /tmp/ibADGDrl\ (deleted)
91565703 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/13 -> /tmp/ibtl5efn\ (deleted)
</syntaxhighlight>
Or you can use <i>lsof</i>:
<syntaxhighlight lang=bash>
# lsof +aL1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
mysqld 2118 mysql 7u REG 0,27 0 0 32780 /tmp/ibNhVEnm (deleted)
mysqld 2118 mysql 8u REG 0,27 0 0 32782 /tmp/ibhSEF8n (deleted)
mysqld 2118 mysql 9u REG 0,27 0 0 32786 /tmp/ibADGDrl (deleted)
mysqld 2118 mysql 13u REG 0,27 0 0 32796 /tmp/ibtl5efn (deleted)
</syntaxhighlight>
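The phenomenon itself is easy to reproduce in a shell, which also shows why the space is only freed once the last file descriptor is closed:

```shell
# Create a file, keep fd 3 open on it, delete the name: the inode
# stays allocated and appears as "(deleted)" under /proc/<pid>/fd.
tmp=$(mktemp)
exec 3>"$tmp"              # keep an fd open on the file
rm "$tmp"                  # delete the directory entry
ls -l "/proc/$$/fd/3"      # link target ends in "(deleted)"
exec 3>&-                  # closing the fd finally frees the space
```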
ff61a00e702a40e52b83d185a98487b6642f7987
2805
2804
2024-11-19T12:58:17Z
Lollypop
2
/* Find open but deleted files */
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
First line enables sysrq, second line sends the reboot request.
For more look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that removed devices in the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
20 GB
# echo 1 > /sys/class/block/${device}/device/rescan
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
25 GB
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> (from the Ubuntu package lsscsi):
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
<pre>
# mount
# pvs
# zpool status
# etc.
</pre>
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the SCSI address reported by lsscsi above.
Et voila:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now let the ZPool grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Send an online to the already-online device to force the ZPool to recheck and resize without an export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voila:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand to off if you want to prevent automatic expansion when the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
Then extend the logical volume into the new free space:
<syntaxhighlight lang=bash>
# lvextend -l +100%FREE /dev/vg-root/log
</syntaxhighlight>
Done.
==Find open but deleted files==
Sometimes you have a full filesystem, but cannot see files with ls.<br>
And the output of <i>du -sh <mountpoint></i> and <i>df -h <mountpoint></i> differ, because <i>du</i> only sums up the files it can reach by traversing the directory.<br>
Then it is time to look for files that are still open by some process but already deleted from the filesystem.<br>
You can investigate the /proc kernel filesystem:
<syntaxhighlight lang=bash>
# find /proc/*/fd -ls | grep '(deleted)'
91565697 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/7 -> /tmp/ibNhVEnm\ (deleted)
91565698 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/8 -> /tmp/ibhSEF8n\ (deleted)
91565699 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/9 -> /tmp/ibADGDrl\ (deleted)
91565703 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/13 -> /tmp/ibtl5efn\ (deleted)
</syntaxhighlight>
Or you can use <i>lsof</i>:
<syntaxhighlight lang=bash>
# lsof +aL1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
mysqld 2118 mysql 7u REG 0,27 0 0 32780 /tmp/ibNhVEnm (deleted)
mysqld 2118 mysql 8u REG 0,27 0 0 32782 /tmp/ibhSEF8n (deleted)
mysqld 2118 mysql 9u REG 0,27 0 0 32786 /tmp/ibADGDrl (deleted)
mysqld 2118 mysql 13u REG 0,27 0 0 32796 /tmp/ibtl5efn (deleted)
</syntaxhighlight>
Or for a specific running command:
<syntaxhighlight lang=bash>
# lsof +aL1 -c mariadbd
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
mariadbd 2821 mysql 7u REG 0,36 0 0 50 /tmp/#50 (deleted)
mariadbd 2821 mysql 8u REG 0,36 0 0 51 /tmp/#51 (deleted)
mariadbd 2821 mysql 10u REG 0,36 5420 0 52 /tmp/#52 (deleted)
mariadbd 2821 mysql 13u REG 0,36 0 0 53 /tmp/#53 (deleted)
</syntaxhighlight>
To truncate the files you can use the file descriptor of the process that has the file open:
<syntaxhighlight lang=bash>
# : > "/proc/<pid>/fd/<fd>"
</syntaxhighlight>
10a56b5afa4ef4da822b717c6bf28461e787b813
2806
2805
2024-11-19T13:00:43Z
Lollypop
2
/* Find open but deleted files */
wikitext
text/x-wiki
[[Category:Linux|Tipps und Tricks]]
==Hard reboot==
This is the hard way to kick your kernel into the void. No filesystem sync is done, just an ugly, fast, direct reboot!
You should never do this...
<syntaxhighlight lang=bash>
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
</syntaxhighlight>
The first line enables sysrq, the second line sends the reboot request.
For more details look at [https://www.kernel.org/doc/Documentation/sysrq.txt kernel.org]!
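The sysrq value is a bitmask (or 1 to enable everything). As a sketch, this is how the reboot bit can be decoded; the value 176 is only an invented example, the bit meanings come from the kernel sysrq documentation linked above:
<syntaxhighlight lang=bash>
# decode a sysrq bitmask; 176 is an invented example value, the bit
# meanings (1 = all, 16 = sync, 32 = remount ro, 128 = reboot/poweroff)
# come from the kernel sysrq documentation linked above
mask=176
if [ "$mask" -eq 1 ] || [ $(( mask & 128 )) -ne 0 ]; then
    verdict="reboot via sysrq-trigger allowed"
else
    verdict="reboot via sysrq-trigger blocked"
fi
echo "$verdict"
</syntaxhighlight>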
==Scan all SCSI buses for new devices==
<syntaxhighlight lang=bash>
# for i in /sys/class/scsi_host/host*/scan ; do echo "- - -" > $i ; done
</syntaxhighlight>
==Scan all FC ports for new devices==
!!!Be CAREFUL!!!
This command line issues a Loop Initialization Protocol (LIP). This is a bus reset, which means that devices removed from the fabric will disappear and new ones will appear.
!!!BUT the connection might get lost for a moment!!!
The softer way is [[#Scan all SCSI buses for new devices|to scan the SCSI buses]].
<syntaxhighlight lang=bash>
# for i in /sys/class/fc_host/*/issue_lip ; do echo "1" > $i ; done
</syntaxhighlight>
==Rescan a device (for example after changing a VMDK size)==
<syntaxhighlight lang=bash>
# device=sda
# echo 1 > /sys/class/block/${device}/device/rescan
</syntaxhighlight>
This is for device sda after changing the VMDK from 20GB to 25GB:
<syntaxhighlight lang=bash>
# device=sda
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
20 GB
# echo 1 > /sys/class/block/${device}/device/rescan
# echo "$[ 512 * $(</sys/block/${device}/size) / 1024 ** 3 ] GB"
25 GB
# parted /dev/${device} "print free"
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or
continue with the current setting?
Fix/Ignore? F
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 21,5GB 21,5GB zfs
21,5GB 26,8GB 5369MB Free Space
</syntaxhighlight>
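The size arithmetic above can be checked in isolation. Note that /sys/block/<device>/size always counts 512-byte sectors, independent of the device's logical block size; the sector count below is a made-up 20 GB example, not read from a real device:
<syntaxhighlight lang=bash>
# sector count to GB as in the echo above; 41943040 sectors is a
# hypothetical 20 GB disk, not read from any real device
sectors=41943040
gib=$(( sectors * 512 / (1024 * 1024 * 1024) ))
echo "$gib GB"
</syntaxhighlight>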
I want to put the free space into partition 1 and resize the rpool:
<syntaxhighlight lang=bash>
# parted /dev/${device} "resizepart 1 -1"
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 26,8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
2 17,4kB 1049kB 1031kB bios_grub
1 1049kB 26,8GB 26,8GB zfs
26,8GB 26,8GB 983kB Free Space
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19,9G 1,68G 18,2G - 14% 8% 1.00x ONLINE -
# zpool set autoexpand=on rpool
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda1 ONLINE 0 0 0
# zpool online rpool sda1
# zpool list rpool
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 24,9G 1,69G 23,2G - 11% 6% 1.00x ONLINE -
# zpool set autoexpand=off rpool
</syntaxhighlight>
Done.
==Remove a SCSI-device==
Let us say we want to remove /dev/sdb.
Be careful! As this example shows, the lowest SCSI ID does not always belong to the lowest device name!
Check it with <i>lsscsi</i> (from the Ubuntu package lsscsi):
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:0:0] disk VMware Virtual disk 1.0 /dev/sdb
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
Then check that it is no longer in use:
<pre>
# mount
# pvs
# zpool status
# etc.
</pre>
Then delete it:
<syntaxhighlight lang=bash>
# echo 1 > /sys/bus/scsi/drivers/sd/32\:0\:0\:0/delete
</syntaxhighlight>
The 32:0:0:0 is the SCSI address reported by lsscsi above.
Et voila:
<syntaxhighlight lang=bash>
# lsscsi
[2:0:0:0] cd/dvd NECVMWar VMware SATA CD00 1.00 /dev/sr0
[32:0:1:0] disk VMware Virtual disk 1.0 /dev/sda
</syntaxhighlight>
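To avoid picking the wrong address by hand, the lsscsi output can also be parsed. This is only a sketch with the sample output above hardcoded; on a live system you would pipe lsscsi itself into awk:
<syntaxhighlight lang=bash>
# map a device name to its SCSI address by parsing lsscsi-style output;
# the sample lines are hardcoded here, on a live system pipe lsscsi itself
dev=/dev/sdb
addr=$(awk -v d="$dev" '$NF == d { gsub(/[\[\]]/, "", $1); print $1 }' <<'EOF'
[2:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0
[32:0:0:0]   disk    VMware   Virtual disk     1.0   /dev/sdb
[32:0:1:0]   disk    VMware   Virtual disk     1.0   /dev/sda
EOF
)
echo "$addr"
</syntaxhighlight>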
==Copy a GPT partition table==
Copy partition table of sdX to sdY:
<syntaxhighlight lang=bash>
# sgdisk /dev/sdX --replicate=/dev/sdY
# sgdisk --randomize-guids /dev/sdY
</syntaxhighlight>
Or with:
<syntaxhighlight lang=bash>
# sgdisk --backup=sdX.table /dev/sdX
# sgdisk --load-backup=sdX.table /dev/sdY
# sgdisk -G /dev/sdY
</syntaxhighlight>
<pre>
-R, --replicate=second_device_filename
Replicate the main device's partition table on the specified second device. Note that the replicated partition table is an exact
copy, including all GUIDs; if the device should have its own unique GUIDs, you should use the -G option on the new disk.
-G, --randomize-guids
Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used
after cloning a disk in order to render all GUIDs once again unique.
</pre>
==Resize a GPT partition==
The partition was resized in VMWare from ~6GB to ~50GB.
In the VM I did [[#Remove a SCSI-device|Remove a SCSI-device]] for the resized device and then [[#Scan all SCSI buses for new devices|Scan all SCSI buses for new devices]]; after that, parted saw the new size.
===Correct the GPT partition table===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 92274688 blocks) or continue with the
current setting?
Fix/Ignore? F <-- ! choose F
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB <-- ! the new size is reported now
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
</syntaxhighlight>
===Resize the partition===
<syntaxhighlight lang=bash>
root@mariadb:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6442MB 6441MB zfs
(parted) resizepart 1
End? [6442MB]? 53,7GB <-- ! Put new size here
(parted) p <-- ! Control if it worked
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53,7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53,7GB 53,7GB zfs
(parted) q
Information: You may need to update /etc/fstab.
</syntaxhighlight>
===Optional: Resize the ZPool in it===
Check the actual values:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 5,97G 994M 5,00G 44G 47% 16% 1.00x ONLINE -
root@mariadb:~# zpool get autoexpand MYSQL-DATA
NAME PROPERTY VALUE SOURCE
MYSQL-DATA autoexpand off default
</syntaxhighlight>
Now let the ZPool grow to the end of the partition.
Set autoexpand to on:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=on MYSQL-DATA
</syntaxhighlight>
Send an online to the already-online device to force the ZPool to recheck and resize without an export/import:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool online MYSQL-DATA /dev/sdb1
</syntaxhighlight>
Et voila:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool list MYSQL-DATA
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MYSQL-DATA 50,0G 994M 49,0G - 5% 1% 1.00x ONLINE -
rpool 19,9G 3,36G 16,5G - 19% 16% 1.00x ONLINE -
</syntaxhighlight>
Set autoexpand to off if you want to prevent automatic expansion when the partition grows:
<syntaxhighlight lang=bash>
root@mariadb:~# zpool set autoexpand=off MYSQL-DATA
</syntaxhighlight>
===Optional: Resize the LVM physical volume===
Check the values:
<syntaxhighlight lang=bash>
# parted /dev/${device} "print free"
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 48.3GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 48.3GB 48.3GB primary boot
48.3GB 48.3GB 999kB Free Space
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <35.00g 0
</syntaxhighlight>
OK, we need to resize the physical volume:
<syntaxhighlight lang=bash>
# pvresize /dev/sda1
Physical volume "/dev/sda1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
</syntaxhighlight>
Check the values:
<syntaxhighlight lang=bash>
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 vg-root lvm2 a-- <45.00g 10.00g
</syntaxhighlight>
Then extend the logical volume into the new free space:
<syntaxhighlight lang=bash>
# lvextend -l +100%FREE /dev/vg-root/log
</syntaxhighlight>
Done.
==Find open but deleted files==
Sometimes you have a full filesystem, but cannot see files with ls.<br>
And the output of <i>du -sh <mountpoint></i> and <i>df -h <mountpoint></i> differ, because <i>du</i> only sums up the files it can reach by traversing the directory.<br>
Then it is time to look for files that are still open by some process but already deleted from the filesystem.<br>
You can investigate the /proc kernel filesystem:
<syntaxhighlight lang=bash>
# find /proc/*/fd -ls | grep '(deleted)'
91565697 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/7 -> /tmp/ibNhVEnm\ (deleted)
91565698 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/8 -> /tmp/ibhSEF8n\ (deleted)
91565699 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/9 -> /tmp/ibADGDrl\ (deleted)
91565703 0 lrwx------ 1 mysql mysql 64 Nov 19 12:55 /proc/2118/fd/13 -> /tmp/ibtl5efn\ (deleted)
</syntaxhighlight>
Or you can use <i>lsof</i>:
<syntaxhighlight lang=bash>
# lsof +aL1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
mysqld 2118 mysql 7u REG 0,27 0 0 32780 /tmp/ibNhVEnm (deleted)
mysqld 2118 mysql 8u REG 0,27 0 0 32782 /tmp/ibhSEF8n (deleted)
mysqld 2118 mysql 9u REG 0,27 0 0 32786 /tmp/ibADGDrl (deleted)
mysqld 2118 mysql 13u REG 0,27 0 0 32796 /tmp/ibtl5efn (deleted)
</syntaxhighlight>
Or for a specific running command:
<syntaxhighlight lang=bash>
# lsof +aL1 -c mariadbd
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
mariadbd 2821 mysql 7u REG 0,36 0 0 50 /tmp/#50 (deleted)
mariadbd 2821 mysql 8u REG 0,36 0 0 51 /tmp/#51 (deleted)
mariadbd 2821 mysql 10u REG 0,36 5420 0 52 /tmp/#52 (deleted)
mariadbd 2821 mysql 13u REG 0,36 0 0 53 /tmp/#53 (deleted)
</syntaxhighlight>
To truncate the files you can use the file descriptor of the process that has the file open:<br>
<b>ATTENTION! Do this ONLY if you know exactly what you are doing!!! Restarting the process is often the much safer solution!!!</b>
<syntaxhighlight lang=bash>
# : > "/proc/<pid>/fd/<fd>"
</syntaxhighlight>
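The whole situation can be reproduced safely in a shell (Linux-only sketch, throwaway file from mktemp): open a descriptor, unlink the file, watch the (deleted) marker appear, then truncate through /proc:
<syntaxhighlight lang=bash>
# Linux-only sketch: create an open-but-deleted file and truncate it via /proc
tmp=$(mktemp)
exec 9>"$tmp"                       # keep descriptor 9 open on the file
echo "some data" >&9
rm -f "$tmp"                        # unlink it while fd 9 stays open
state=$(ls -l "/proc/$$/fd/9" | grep -c 'deleted')
echo "deleted markers seen: $state"
: > "/proc/$$/fd/9"                 # the truncation trick from above
size=$(wc -c < "/proc/$$/fd/9")
echo "size after truncate: $size"
exec 9>&-                           # closing the fd finally frees the space
</syntaxhighlight>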
53561e291ab8cca02f5957ce2ab14560a165393e
Exim cheatsheet
0
27
2807
2648
2024-12-31T08:53:14Z
Lollypop
2
/* Header einer MailID ansehen */
wikitext
text/x-wiki
[[Category:Exim]]
=Questions and answers=
==Show the header of a message ID==
<pre># exim -Mvh <msgid></pre>
==Show statistics of the current queue==
<pre># exim -bpu | exiqsum <parameter></pre>
==Test mail routing==
===Short and sweet===
<pre># exim -bv -v <address></pre>
===With verbose debugging===
<pre># exim -bv -d+all <address></pre>
==How do I trigger delivery of all mails for a specific domain?==
<pre># exim -Rff <Domain></pre>
==How do I trigger delivery of ONE specific mail?==
<pre># exim -M <message-id></pre>
==How do I find out how many mails are in the queue?==
<pre># exim -bpc</pre>
==How do I find a specific mail in the queue?==
You can either search the logfiles
<pre># exigrep <pattern> /var/log/exim/mainlog-yyyymmdd</pre>
or search the queue itself
<pre># exiqgrep -r <pattern></pre>
exipick is even better than exigrep!
List all frozen mails in the queue:
<pre>
# exipick -z
</pre>
List all mails in the queue addressed to <recipient>:
<pre>
# exipick -r <recipient>
</pre>
List all mails in the queue sent by <sender>:
<pre>
# exipick -f <sender>
</pre>
List all mails in the queue that were submitted locally:
<pre>
# exipick --or '$sender_host_address eq 127.0.0.1' '$received_protocol eq local'
</pre>
Even the body of a mail can be searched:
<pre>
# /opt/exim/bin/exipick '$message_body =~ /.*Vjagra.*/'
</pre>
Or print the sender_host_address of all mails that are older than 40 and younger than 50 minutes and not frozen:
<pre>
# exipick --show-vars sender_host_address '$message_age > 40m' '$message_age < 50m' '!$deliver_freeze'
</pre>
==What are the Exim processes doing?==
<pre># exiwhat</pre>
==Print Exim parameters==
<pre># exim -bP <parameter></pre>
e.g.:
<pre># exim -bP message_size_limit</pre>
==Always useful: look at the queue files==
<pre>
# find $(exim -bP spool_directory | nawk '{print $NF;}')/input
</pre>
==Display configured tls settings==
===gnutls===
<syntaxhighlight lang=bash>
$ gnutls-cli --list CIPHER --priority "$(exim -bP tls_require_ciphers | awk '{print $NF}')"
Cipher suites for %SERVER_PRECEDENCE:%LATEST_RECORD_VERSION:PFS:-VERS-TLS-ALL:+VERS-TLS1.2:-VERS-DTLS-ALL:-KX-ALL:-CIPHER-ALL:-MAC-ALL:-CURVE-ALL:-SIGN-ALL:+ECDHE-RSA:+ECDHE-ECDSA:+DHE-DSS:+DHE-RSA:+AES-256-CBC:+AES-128-CBC:+AES-256-GCM:+AES-128-GCM:+CHACHA20-POLY1305:+SHA256:+SHA384:+AEAD:+CURVE-SECP256R1:+CURVE-SECP384R1:+SIGN-RSA-SHA256
TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc0, 0x28 TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_ECDHE_RSA_CHACHA20_POLY1305 0xcc, 0xa8 TLS1.2
TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2
TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2
TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2
TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2
TLS_ECDHE_ECDSA_CHACHA20_POLY1305 0xcc, 0xa9 TLS1.2
TLS_DHE_DSS_AES_256_CBC_SHA256 0x00, 0x6a TLS1.2
TLS_DHE_DSS_AES_128_CBC_SHA256 0x00, 0x40 TLS1.2
TLS_DHE_DSS_AES_256_GCM_SHA384 0x00, 0xa3 TLS1.2
TLS_DHE_DSS_AES_128_GCM_SHA256 0x00, 0xa2 TLS1.2
TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2
TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2
TLS_DHE_RSA_AES_256_GCM_SHA384 0x00, 0x9f TLS1.2
TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2
TLS_DHE_RSA_CHACHA20_POLY1305 0xcc, 0xaa TLS1.2
Protocols: VERS-TLS1.2
Ciphers: AES-256-CBC, AES-128-CBC, AES-256-GCM, AES-128-GCM, CHACHA20-POLY1305
MACs: SHA256, SHA384, AEAD
Key Exchange Algorithms: ECDHE-RSA, ECDHE-ECDSA, DHE-DSS, DHE-RSA
Groups: GROUP-SECP256R1, GROUP-SECP384R1
PK-signatures: SIGN-RSA-SHA256
</syntaxhighlight>
==Reset the ratelimit for a user==
Find the entries:
<syntaxhighlight lang=bash>
# exim_dumpdb /var/spool/exim ratelimit | grep user
24-Mar-2016 09:51:28.152687 rate: 218.512 key: 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28.098825 rate: 25.618 key: 1d/per_rcpt/failed_recipients:user@server.de
</syntaxhighlight>
Delete the entries:
For this you use the somewhat unwieldy tool <i>exim_fixdb</i>. Enter the key from the output of the previous command to select the corresponding entry in the DB. Then type d (for delete) followed by Enter, and the entry is gone.
<syntaxhighlight lang=bash>
# exim_fixdb /var/spool/exim ratelimit
Modifying Exim hints database /var/spool/exim/db/ratelimit
> 1d/per_rcpt/mail_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .152687
2 sender rate: 218.512
> d
deleted
> 1d/per_rcpt/failed_recipients:user@server.de
24-Mar-2016 09:51:28
0 time stamp: 24-Mar-2016 09:51:28
1 fract. time: .098825
2 sender rate: 25.618
> d
deleted
> ^D
</syntaxhighlight>
==Spam==
<syntaxhighlight lang=bash>
for file in $(ls -1 /var/log/spamassassin/spamd-exim-acl.log* | sort -t'.' -k3n,3n)
do
    if [ "$(basename $file .gz)" == "$(basename $file)" ]
    then
        command="cat"
    else
        command="gzip -cd"
    fi
    printf "%16s - %16s : %7s\t%s\n" \
        "$(${command} ${file} | nawk 'NR==1{print $1,$2,$3}')" \
        "$(${command} ${file} | tail -1 | nawk '{print $1,$2,$3}')" \
        "$(${command} ${file} | grep -c 'result: Y')" \
        "$(basename ${file})"
done
</syntaxhighlight>
= Logrotation with datestamped logfiles =
I love my logfiles datestamped:
<syntaxhighlight lang=bash>
# exim -bP log_file_path
log_file_path = /var/log/exim/%slog-%D
</syntaxhighlight>
But logrotate is a little bit tricky with these files.
I found this to be a good way to rotate the logfiles:
== /etc/logrotate.d/exim ==
<pre>
/var/log/exim/rotate_this_-_do_not_delete {
    daily
    rotate 0
    ifempty
    create
    lastaction
        # gzip all files matching the regex that are not from today
        /usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' ! -mtime +0 -exec /usr/bin/gzip -9q {} \;
        # delete gzipped files matching the regex that are older than 90 days
        /usr/bin/find /var/log/exim -regextype posix-awk -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)\.gz' -mtime +90 -delete
    endscript
}
</pre>
== touch the dummy rotate file ==
This one is needed to trigger the rotation, even though it is only a dummy:
<syntaxhighlight lang=bash>
# touch /var/log/exim/rotate_this_-_do_not_delete
</syntaxhighlight>
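The effect of the find regex can be dry-run in a throwaway directory before touching real logs (GNU find assumed for -regextype; the file names below are invented to cover matching and non-matching cases):
<syntaxhighlight lang=bash>
# dry-run of the rotation regex in a throwaway directory (GNU find assumed)
dir=$(mktemp -d)
touch "$dir/mainlog-20240101" "$dir/rejectlog-20240102" "$dir/paniclog" \
      "$dir/mainlog-old" "$dir/notes"
matched=$(find "$dir" -regextype posix-awk \
    -regex '^/.*/((main|reject)log-[0-9]{8}|paniclog)' | wc -l)
echo "matched: $matched"
rm -rf "$dir"
</syntaxhighlight>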
b99a18c25d901999ff2fa891ed1eec4331ca5927
Nextcloud
0
368
2809
2783
2025-02-11T08:19:40Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<syntaxhighlight lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</syntaxhighlight>
==Send calendar events==
Set the EventRemindersMode to occ:
<syntaxhighlight lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</syntaxhighlight>
and add a cronjob for the user running the webserver:
<syntaxhighlight lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</syntaxhighlight>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<syntaxhighlight lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</syntaxhighlight>
you have to put this into your php apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
<pre>apc.enable_cli=1</pre>
otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<syntaxhighlight lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</syntaxhighlight>
and since version 19:
<syntaxhighlight lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</syntaxhighlight>
Answer the questions...
If you have your own theme, proceed with these steps:
<syntaxhighlight lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</syntaxhighlight>
And the apps:
<syntaxhighlight lang=bash>
# occ app:update --all
</syntaxhighlight>
=Set loglevel from commandline=
As described in the [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/config_sample_php_parameters.html#loglevel documentation], you can set the following log levels:
<pre>
Loglevel to start logging at. Valid values are: 0 = Debug, 1 = Info, 2 = Warning, 3 = Error, and 4 = Fatal. The default value is Warning.
</pre>
You can do this with occ via:
<syntaxhighlight lang=bash>
# occ config:system:set loglevel --type integer --value <log level>
</syntaxhighlight>
or look for the current setting with:
<syntaxhighlight lang=bash>
# occ config:system:get loglevel
2
</syntaxhighlight>
=Some tweaks for the theme to disable several things=
<syntaxhighlight lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* Get rid of the box at login: This community release of Nextcloud is unsupported and push notifications are limited. */
#body-login .notecard {
display: none;
visibility : hidden;
height : 0px !important;
width : 0px !important;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove background-image from all pages, but login page */
body:not(#body-login) {
background-image: none;
}
</syntaxhighlight>
= Memcached =
You can import one of the following config file variants with:
<syntaxhighlight lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</syntaxhighlight>
== ip:port ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
1121
]
]
}
}
</syntaxhighlight>
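Since occ config:import rejects malformed JSON, it can pay off to validate the fragment first. A sketch assuming python3 is available; the address 127.0.0.1:11211 below is only the memcached default, not taken from this setup:
<syntaxhighlight lang=bash>
# validate a fragment with python3 before importing it with occ;
# 127.0.0.1:11211 is just the memcached default, adjust to your setup
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
    "system": {
        "memcache.distributed": "\\OC\\Memcache\\Memcached",
        "memcached_servers": [
            [
                "127.0.0.1",
                11211
            ]
        ]
    }
}
EOF
result=$(python3 -m json.tool "$cfg" > /dev/null 2>&1 && echo ok || echo invalid)
echo "$result"
rm -f "$cfg"
</syntaxhighlight>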
== socket ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</syntaxhighlight>
de4c1ebc7c211ffe9d3423ee4d8bf4a5ff9c2bf2
2810
2809
2025-02-11T08:20:37Z
Lollypop
2
/* Set loglevel from commandline */
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<syntaxhighlight lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</syntaxhighlight>
==Send calendar events==
Set the EventRemindersMode to occ:
<syntaxhighlight lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</syntaxhighlight>
and add a cronjob for the user running the webserver:
<syntaxhighlight lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</syntaxhighlight>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<syntaxhighlight lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</syntaxhighlight>
you have to put this in your php apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
apc.enable_cli=1
otherwise you will get in memory trouble during upgrade and in my case the server was down because out of memory.
<syntaxhighlight lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</syntaxhighlight>
and since version 19:
<syntaxhighlight lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</syntaxhighlight>
Answer the questions...
If you have an own theme proceed with this steps:
<syntaxhighlight lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</syntaxhighlight>
And the apps:
<syntaxhighlight lang=bash>
# occ app:update --all
</syntaxhighlight>
=Set loglevel from commandline=
As written at the [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/config_sample_php_parameters.html#loglevel documentation] you can set the following log levels:
<pre>
Loglevel to start logging at. Valid values are: 0 = Debug, 1 = Info, 2 = Warning, 3 = Error, and 4 = Fatal. The default value is Warning.
</pre>
You can do this with occ via:
<syntaxhighlight lang=bash>
# occ config:system:set loglevel --type integer --value <log level>
</syntaxhighlight>
or look for the current setting with:
<syntaxhighlight lang=bash>
# occ config:system:get loglevel
2
</syntaxhighlight>
=Some tweaks for the theme to disable several things=
<syntaxhighlight lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* Get rid of the box at login: This community release of Nextcloud is unsupported and push notifications are limited. */
#body-login .notecard {
display: none;
visibility : hidden;
height : 0px !important;
width : 0px !important;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove background-image from all pages, but login page */
body:not(#body-login) {
background-image: none;
}
</syntaxhighlight>
= Memcached =
You can import one of the following versions of configfile with
<syntaxhighlight lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</syntaxhighlight>
== ip:port ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
'127.0.0.1',
1121
]
]
}
}
</syntaxhighlight>
== socket ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</syntaxhighlight>
688ffd7c69325ab6890fc3ade553bbfd97404680
2811
2810
2025-02-12T17:23:59Z
Lollypop
2
wikitext
text/x-wiki
[[category:Web]]
=Nextcloud=
==BASH alias==
<syntaxhighlight lang=bash>
alias occ='sudo --user=www-data /usr/bin/php -f /var/www/nextcloud/occ'
</syntaxhighlight>
<syntaxhighlight lang=bash>
# occ status
- installed: true
- version: 19.0.2.2
- versionstring: 19.0.2
- edition:
</syntaxhighlight>
==Send calendar events==
Set <i>sendEventRemindersMode</i> to <i>occ</i>:
<syntaxhighlight lang=bash>
# occ config:app:set dav sendEventRemindersMode --value occ
</syntaxhighlight>
and add a cronjob for the user running the webserver:
<syntaxhighlight lang=bash>
# crontab -u www-data -e
# send calendar events every 5 minutes
*/5 * * * * php -f /var/www/nextcloud/occ dav:send-event-reminders
</syntaxhighlight>
=Manual upgrade=
Caution when upgrading from Nextcloud 20.0.9 to Nextcloud 21.0.1!
If you are using APCu as <i>memcache.local</i>
<syntaxhighlight lang=bash>
# occ config:system:get memcache.local
\OC\Memcache\APCu
</syntaxhighlight>
you have to put this in your PHP apcu.ini (e.g. /etc/php/7.4/mods-available/apcu.ini):
<pre>
apc.enable_cli=1
</pre>
Otherwise you will run into memory trouble during the upgrade; in my case the server went down because it ran out of memory.
<syntaxhighlight lang=bash>
# cd /var/www/nextcloud/updater && sudo -u www-data php updater.phar
# occ db:add-missing-indices
</syntaxhighlight>
and since version 19:
<syntaxhighlight lang=bash>
# occ db:add-missing-columns
# occ db:add-missing-primary-keys
# occ db:convert-filecache-bigint
</syntaxhighlight>
Answer the questions...
If you have your own theme, proceed with these steps:
<syntaxhighlight lang=bash>
# occ config:system:set theme --value <your theme>
# occ maintenance:theme:update
</syntaxhighlight>
And the apps:
<syntaxhighlight lang=bash>
# occ app:update --all
</syntaxhighlight>
=Recreate preview directories=
Sometimes something goes wrong and you lose your preview subdirectories. This shell script recreates the directories. It takes a long time...
<syntaxhighlight lang=bash>
#!/bin/bash
owner=www-data
group=www-data
nextcloud_installdir=/var/www/nextcloud
occ=${nextcloud_installdir}/occ
occ_cmd="/usr/bin/sudo --user=${owner} /usr/bin/php -f ${occ}"
nextcloud_instanceid=$(${occ_cmd} config:system:get instanceid)
nextcloud_datadir=$(${occ_cmd} config:system:get datadirectory)
basedir=${nextcloud_datadir}/appdata_${nextcloud_instanceid}/preview
depth=7

function makesubdirs () {
    local basedir=${1}
    local -i depth=${2}
    (( depth == 0 )) && return
    for dir in ${basedir}/{{0..9},{a..f}}
    do
        [ -d "${dir}" ] || {
            mkdir "${dir}"
            chown ${owner}:${group} "${dir}"
            echo "${dir}"
        }
        makesubdirs "${dir}" $((depth - 1))
    done
}

makesubdirs ${basedir} ${depth}
</syntaxhighlight>
See disclaimer...
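Just to illustrate why it takes so long: with depth=7 and 16 entries (0-9, a-f) per level, the tree holds 16^1 + 16^2 + ... + 16^7 directories. A quick calculation (plain arithmetic, not part of the script above):
<syntaxhighlight lang=bash>
# Sum up 16^1 + 16^2 + ... + 16^7, i.e. the number of
# directories created by makesubdirs with depth=7.
total=0
count=1
for level in 1 2 3 4 5 6 7
do
    count=$((count * 16))
    total=$((total + count))
done
echo ${total}
</syntaxhighlight>
That is 286331152 directories in total.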
=Set loglevel from commandline=
As written in the [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/config_sample_php_parameters.html#loglevel documentation], you can set the following log levels:
<pre>
Loglevel to start logging at. Valid values are: 0 = Debug, 1 = Info, 2 = Warning, 3 = Error, and 4 = Fatal. The default value is Warning.
</pre>
You can do this with occ via:
<syntaxhighlight lang=bash>
# occ config:system:set loglevel --type integer --value <log level>
</syntaxhighlight>
or look for the current setting with:
<syntaxhighlight lang=bash>
# occ config:system:get loglevel
2
</syntaxhighlight>
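If you want the name instead of the number, a tiny helper can map it (my own sketch, <i>loglevel_name</i> is not an occ command):
<syntaxhighlight lang=bash>
# Map a numeric Nextcloud loglevel (0-4) to its name.
function loglevel_name () {
    local names=(Debug Info Warning Error Fatal)
    echo ${names[${1}]}
}
loglevel_name 2
</syntaxhighlight>
For the value 2 returned above this prints <i>Warning</i>.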
=Some tweaks for the theme to disable several things=
<syntaxhighlight lang=css>
/* remove quota */
#quota {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}
/* remove lost password */
.lost-password-container #lost-password, .lost-password-container #lost-password-back {
display: none;
}
/* remove contacts menu */
#contactsmenu { display: none; }
/* remove contacts button */
li[data-id="contacts"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove user button */
li[data-id="core_users"] {
display: none;
visibility : hidden;
height : 0px;
width : 0px;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* Get rid of the box at login: This community release of Nextcloud is unsupported and push notifications are limited. */
#body-login .notecard {
display: none;
visibility : hidden;
height : 0px !important;
width : 0px !important;
margin : 0px;
padding : 0px;
overflow : hidden;
}
/* remove background-image from all pages, but login page */
body:not(#body-login) {
background-image: none;
}
</syntaxhighlight>
= Memcached =
You can import one of the following config file variants with:
<syntaxhighlight lang=shell-session>
# occ config:import /your_memcache_config_file_like_below.json
Config successfully imported from: /your_memcache_config_file_like_below.json
</syntaxhighlight>
== ip:port ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"127.0.0.1",
1121
]
]
}
}
</syntaxhighlight>
== socket ==
<syntaxhighlight lang=JSON>
{
"system": {
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/memcached\/memcached.sock",
0
]
]
}
}
</syntaxhighlight>
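The same values can also be set without an import file via <i>config:system:set</i>, passing nested keys as additional arguments (analogous to the <i>trusted_domains</i> example in the Nextcloud admin manual); shown here for the ip:port variant above:
<syntaxhighlight lang=shell-session>
# occ config:system:set memcache.distributed --value '\OC\Memcache\Memcached'
# occ config:system:set memcached_servers 0 0 --value 127.0.0.1
# occ config:system:set memcached_servers 0 1 --value 1121 --type integer
</syntaxhighlight>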